openlp-core team mailing list archive

[Merge] lp:~supagu/openlp/planningcenter-branch into lp:openlp

Fabian Mathews has proposed merging lp:~supagu/openlp/planningcenter-branch into lp:openlp.

Commit message:
Added planningcenter plugin

Requested reviews:
  OpenLP Core (openlp-core)

For more details, see:
https://code.launchpad.net/~supagu/openlp/planningcenter-branch/+merge/272094

In the plugins directory I added "planningcenter". This plugin adds support for Planning Center Online, and also makes it possible to download videos from YouTube via Planning Center.

NOTE: in the lib directory I have included "youtube-dl", a Python module I grabbed off the internet. I bundled it this way to avoid having to install it; it should probably be changed to an installed module.
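
For reference, if youtube-dl were installed as a regular module (e.g. via pip) instead of being bundled, the plugin's download call could look roughly like this minimal sketch (the output template and URL below are placeholders):

    import youtube_dl  # pip install youtube-dl

    options = {
        'outtmpl': '/path/to/media/%(title)s.%(ext)s',  # where to write the downloaded file
        'quiet': True,                                  # suppress console output
    }
    with youtube_dl.YoutubeDL(options) as ydl:
        ydl.download(['https://www.youtube.com/watch?v=VIDEO_ID'])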

Related forum post: http://forums.openlp.org/discussion/2705/planning-center-and-youtube-download-plugin


-- 
The attached diff has been truncated due to its size.
Your team OpenLP Core is requested to review the proposed merge of lp:~supagu/openlp/planningcenter-branch into lp:openlp.
=== added directory 'openlp/plugins/planningcenter'
=== added file 'openlp/plugins/planningcenter/__init__.py'
--- openlp/plugins/planningcenter/__init__.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/__init__.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,24 @@
+# -*- coding: utf-8 -*-
+# vim: autoindent shiftwidth=4 expandtab textwidth=120 tabstop=4 softtabstop=4
+
+###############################################################################
+# OpenLP - Open Source Lyrics Projection                                      #
+# --------------------------------------------------------------------------- #
+# Copyright (c) 2008-2015 OpenLP Developers                                   #
+# --------------------------------------------------------------------------- #
+# This program is free software; you can redistribute it and/or modify it     #
+# under the terms of the GNU General Public License as published by the Free  #
+# Software Foundation; version 2 of the License.                              #
+#                                                                             #
+# This program is distributed in the hope that it will be useful, but WITHOUT #
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or       #
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for    #
+# more details.                                                               #
+#                                                                             #
+# You should have received a copy of the GNU General Public License along     #
+# with this program; if not, write to the Free Software Foundation, Inc., 59  #
+# Temple Place, Suite 330, Boston, MA 02111-1307 USA                          #
+###############################################################################
+"""
+The :mod:`planningcenter` module provides the PlanningCenter plugin for importing plans from Planning Center Online.
+"""

=== added directory 'openlp/plugins/planningcenter/forms'
=== added file 'openlp/plugins/planningcenter/forms/__init__.py'
--- openlp/plugins/planningcenter/forms/__init__.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/forms/__init__.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,48 @@
+# -*- coding: utf-8 -*-
+# vim: autoindent shiftwidth=4 expandtab textwidth=120 tabstop=4 softtabstop=4
+
+###############################################################################
+# OpenLP - Open Source Lyrics Projection                                      #
+# --------------------------------------------------------------------------- #
+# Copyright (c) 2008-2015 OpenLP Developers                                   #
+# --------------------------------------------------------------------------- #
+# This program is free software; you can redistribute it and/or modify it     #
+# under the terms of the GNU General Public License as published by the Free  #
+# Software Foundation; version 2 of the License.                              #
+#                                                                             #
+# This program is distributed in the hope that it will be useful, but WITHOUT #
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or       #
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for    #
+# more details.                                                               #
+#                                                                             #
+# You should have received a copy of the GNU General Public License along     #
+# with this program; if not, write to the Free Software Foundation, Inc., 59  #
+# Temple Place, Suite 330, Boston, MA 02111-1307 USA                          #
+###############################################################################
+"""
+Forms in OpenLP are made up of two classes. One class holds all the graphical elements, like buttons and lists, and the
+other class holds all the functional code, like slots and loading and saving.
+
+The first class, commonly known as the **Dialog** class, is typically named ``Ui_<name>Dialog``. It is a slightly
+modified version of the class that the ``pyuic4`` command produces from Qt4's .ui file. Typical modifications will be
+converting most strings from "" to '' and using OpenLP's ``translate()`` function for translating strings.
+
+The second class, commonly known as the **Form** class, is typically named ``<name>Form``. This class is the one which
+is instantiated and used. It uses dual inheritance to inherit from (usually) QtGui.QDialog and the Ui class mentioned
+above, like so::
+
+    class AuthorsForm(QtGui.QDialog, Ui_AuthorsDialog):
+
+        def __init__(self, parent=None):
+            super(AuthorsForm, self).__init__(parent)
+            self.setupUi(self)
+
+This allows OpenLP to use ``self.object`` for all the GUI elements while keeping them separate from the functionality,
+so that it is easier to recreate the GUI from the .ui files later if necessary.
+"""
+
+from .youtubedownloaddialog import YoutubeDownloadDialog
+from .logindialog import LoginDialog
+from .planimportdialog import PlanImportDialog
+from .maindialog import MainDialog
+

=== added file 'openlp/plugins/planningcenter/forms/logindialog.py'
--- openlp/plugins/planningcenter/forms/logindialog.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/forms/logindialog.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,84 @@
+# -*- coding: utf-8 -*-
+# vim: autoindent shiftwidth=4 expandtab textwidth=120 tabstop=4 softtabstop=4
+
+###############################################################################
+# OpenLP - Open Source Lyrics Projection                                      #
+# --------------------------------------------------------------------------- #
+# Copyright (c) 2008-2015 OpenLP Developers                                   #
+# --------------------------------------------------------------------------- #
+# This program is free software; you can redistribute it and/or modify it     #
+# under the terms of the GNU General Public License as published by the Free  #
+# Software Foundation; version 2 of the License.                              #
+#                                                                             #
+# This program is distributed in the hope that it will be useful, but WITHOUT #
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or       #
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for    #
+# more details.                                                               #
+#                                                                             #
+# You should have received a copy of the GNU General Public License along     #
+# with this program; if not, write to the Free Software Foundation, Inc., 59  #
+# Temple Place, Suite 330, Boston, MA 02111-1307 USA                          #
+###############################################################################
+
+import os
+
+from PyQt4 import QtGui, QtCore, uic
+
+from openlp.core.common import AppLocation, Settings, translate
+from openlp.plugins.planningcenter.lib import Session
+
+class LoginDialog(QtGui.QDialog):
+    """
+    Provide UI for logging in to Planning Center Online
+    """
+    def __init__(self, parent):
+        """
+        Initialise the login dialog
+        """
+        extra_settings = {
+            'planningcenter/login email': ""
+        }
+        Settings.extend_default_settings(extra_settings)
+        QtGui.QDialog.__init__(self, parent)
+        self.setupUi()
+
+    def setupUi(self):
+        """
+        Setup the UI
+        """
+        uiPath = os.path.join(AppLocation.get_directory(AppLocation.PluginsDir),
+                                     'planningcenter', 'resources', 'logindialog.ui')
+        uic.loadUi(uiPath, self)
+        self.setWindowIcon(self.parent().windowIcon())
+
+        settings = Settings()
+        settings.beginGroup('planningcenter')
+        self.emailEdit.setText(settings.value('login email'))
+        settings.endGroup()
+
+        # if there is already an email address, give focus to the password field
+        if len(self.emailEdit.text()) > 0:
+            self.passwordEdit.setFocus()
+
+
+    def exec_(self):
+        """
+        Execute the dialog and return the exit code.
+        """
+        return QtGui.QDialog.exec_(self)
+
+    def accept(self):
+        s = Session()
+        s.login(self.emailEdit.text(), self.passwordEdit.text()) 
+        if s.isLoggedIn():
+            settings = Settings()
+            settings.beginGroup('planningcenter')
+            settings.setValue('login email', self.emailEdit.text())
+            settings.endGroup()
+            QtGui.QDialog.accept(self)
+        else:
+            QtGui.QMessageBox.information(self,
+                                          translate('PlanningCenterPlugin.LoginDialog', 'Login Failed'),
+                                          translate('PlanningCenterPlugin.LoginDialog',
+                                                    'Login failed, please ensure you have entered the correct email and password.'))

=== added file 'openlp/plugins/planningcenter/forms/maindialog.py'
--- openlp/plugins/planningcenter/forms/maindialog.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/forms/maindialog.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,121 @@
+# -*- coding: utf-8 -*-
+# vim: autoindent shiftwidth=4 expandtab textwidth=120 tabstop=4 softtabstop=4
+
+###############################################################################
+# OpenLP - Open Source Lyrics Projection                                      #
+# --------------------------------------------------------------------------- #
+# Copyright (c) 2008-2015 OpenLP Developers                                   #
+# --------------------------------------------------------------------------- #
+# This program is free software; you can redistribute it and/or modify it     #
+# under the terms of the GNU General Public License as published by the Free  #
+# Software Foundation; version 2 of the License.                              #
+#                                                                             #
+# This program is distributed in the hope that it will be useful, but WITHOUT #
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or       #
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for    #
+# more details.                                                               #
+#                                                                             #
+# You should have received a copy of the GNU General Public License along     #
+# with this program; if not, write to the Free Software Foundation, Inc., 59  #
+# Temple Place, Suite 330, Boston, MA 02111-1307 USA                          #
+###############################################################################
+
+import os
+import re
+import logging
+
+from PyQt4 import QtGui, QtCore, uic
+
+from openlp.core.common import Registry, AppLocation, translate
+from openlp.plugins.planningcenter.forms import LoginDialog, PlanImportDialog
+from openlp.plugins.planningcenter.lib import Session
+
+class MainDialog(QtGui.QDialog):
+    """
+    Provide UI for displaying plans and schedules
+    """
+    def __init__(self, plugin):
+        """
+        Initialise the main dialog
+        """
+        QtGui.QDialog.__init__(self, plugin.main_window)
+        self.manager = plugin.manager
+        self.plugin = plugin
+        self.item_id = None
+        self.setupUi()
+
+    def setupUi(self):
+        """
+        Setup the UI
+        """
+        uiPath = os.path.join(AppLocation.get_directory(AppLocation.PluginsDir),
+                                     'planningcenter', 'resources', 'maindialog.ui')
+        uic.loadUi(uiPath, self)
+        self.setWindowIcon(QtGui.QIcon(self.plugin.iconPath()))
+
+    def exec_(self):
+        """
+        Execute the dialog and return the exit code.
+        """
+        self.treeWidget.clear()
+        QtCore.QTimer.singleShot(100, self.updatePlans)
+        return QtGui.QDialog.exec_(self)
+
+    def login(self):
+        """
+        Display the login dialog to let the user login
+        """
+        dlg = LoginDialog(self)
+        dlg.exec_()
+
+    def updatePlans(self):
+        """
+        Grab the plans from the Planning Center website and populate the tree widget
+        """
+        s = Session()
+        if not s.isLoggedIn():
+            self.login()
+
+        # user cancelled, so abort
+        if not s.isLoggedIn():
+            self.close()
+            return
+
+        # populate the dashboard widget
+        organisation = s.organisation()
+
+        for serviceType in organisation["service_types"]:
+            serviceTypePlans = s.serviceTypePlans(serviceType["id"])
+
+            serviceTypeItem = QtGui.QTreeWidgetItem()
+            serviceTypeItem.setText(0, serviceType["name"])
+
+            icon = QtGui.QIcon(self.planIconPath())
+            serviceTypeItem.setIcon(0, icon)
+
+            self.treeWidget.invisibleRootItem().addChild(serviceTypeItem)
+
+            for plan in serviceTypePlans:
+                planItem = QtGui.QTreeWidgetItem()
+                planItem.setText(0, plan["dates"])
+                planItem.setText(1, str(plan["id"]))
+                serviceTypeItem.addChild(planItem)
+
+        self.treeWidget.expandAll()
+
+
+    def on_treeWidget_itemDoubleClicked(self, item, column):
+        """
+        Time to import the plan
+        """
+        dlg = PlanImportDialog(self, item.text(1))
+        dlg.exec_()
+
+    def planIconPath(self):
+        """
+        Get the path to the plan icon
+        """
+        return os.path.join(AppLocation.get_directory(AppLocation.PluginsDir),
+                                     'planningcenter', 'resources', 'planicon.png')
+
+

=== added file 'openlp/plugins/planningcenter/forms/planimportdialog.py'
--- openlp/plugins/planningcenter/forms/planimportdialog.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/forms/planimportdialog.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,685 @@
+# -*- coding: utf-8 -*-
+# vim: autoindent shiftwidth=4 expandtab textwidth=120 tabstop=4 softtabstop=4
+
+###############################################################################
+# OpenLP - Open Source Lyrics Projection                                      #
+# --------------------------------------------------------------------------- #
+# Copyright (c) 2008-2015 OpenLP Developers                                   #
+# --------------------------------------------------------------------------- #
+# This program is free software; you can redistribute it and/or modify it     #
+# under the terms of the GNU General Public License as published by the Free  #
+# Software Foundation; version 2 of the License.                              #
+#                                                                             #
+# This program is distributed in the hope that it will be useful, but WITHOUT #
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or       #
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for    #
+# more details.                                                               #
+#                                                                             #
+# You should have received a copy of the GNU General Public License along     #
+# with this program; if not, write to the Free Software Foundation, Inc., 59  #
+# Temple Place, Suite 330, Boston, MA 02111-1307 USA                          #
+###############################################################################
+
+import os
+import types
+import re
+
+from PyQt4 import QtGui, QtCore, uic
+
+from openlp.core.common import AppLocation, Settings, translate
+from openlp.plugins.planningcenter.lib import Session
+from openlp.plugins.planningcenter.lib import Youtube
+from openlp.core.common import Registry, RegistryProperties
+from openlp.plugins.songs.lib.db import Author
+from openlp.plugins.custom.lib import CustomXMLBuilder, CustomXMLParser
+from openlp.plugins.custom.lib.db import CustomSlide
+
+class PlanImportDialog(QtGui.QDialog, RegistryProperties):
+    """
+    Provide UI for importing a plan and displaying download progress
+    """
+    def __init__(self, parent, planId):
+        """
+        Initialise the plan import dialog
+        """
+        QtGui.QDialog.__init__(self, parent)
+        self.planId = planId
+        self.setupUi()
+        
+    def setupUi(self):
+        """
+        Setup the UI
+        """
+        uiPath = os.path.join(AppLocation.get_directory(AppLocation.PluginsDir),
+                                     'planningcenter', 'resources', 'planimportdialog.ui')
+        uic.loadUi(uiPath, self)
+        self.setWindowIcon(self.parent().windowIcon())
+        self.progressBar.setValue(0)
+        self.progressLabel.setText("Processing plan items...")
+        self.secondLabel.setText("")
+        self.textEdit.setText("")
+        self.openDirButton.setVisible(False)
+
+    def exec_(self):
+        """
+        Execute the dialog and return the exit code.
+        """
+        QtCore.QTimer.singleShot(100, self.importPlan)
+        return QtGui.QDialog.exec_(self)
+
+    def importPlan(self):
+        """
+        Download the media and import it
+        """
+        # item count is a crude progress metric; really we should sum the media
+        # size of all items, or add a loading bar per item
+        s = Session()
+        plan = s.plan(self.planId)
+
+        self.progressCanceled = False
+        self.progressComplete = False
+
+        # setup the absolute output directory from settings
+        settings = Settings()
+        settings.beginGroup('planningcenter')
+        self.outputDirectory = settings.value('media directory')
+        if self.outputDirectory:
+            self.outputDirectory = os.path.abspath(self.outputDirectory)
+        else:
+            self.outputDirectory = os.path.abspath(plan["dates"])
+
+        settings.endGroup()
+
+        self.textEdit.setText("Creating output directory: " + self.outputDirectory + "\n")
+
+        # create the directory
+        if not os.path.exists(self.outputDirectory):
+            os.makedirs(self.outputDirectory)
+
+        self.downloadAllItems(plan)
+
+        # clean up downloaded files?
+        if self.progressCanceled:
+            self.textEdit.append("\nCanceled by user")
+            self.close()
+            return
+
+        # save out the service
+        self.service_manager._save_lite = False
+        self.service_manager._file_name = os.path.join(self.outputDirectory, plan["dates"] + ".osz")
+        self.service_manager.decide_save_method()
+
+        self.textEdit.append("\nComplete")
+        self.progressLabel.setText("Complete")
+
+        # keep the dialog open so the user has a chance to check the log output
+        self.button.setText("Ok")
+        self.progressComplete = True
+
+        self.openDirButton.setVisible(True)
+
+    def downloadAllItems(self, plan):
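+        """
+        Gather the media, songs and custom slides from the plan, then download and import each item
+        """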
+        s = Session()
+        
+        processedItems = []
+
+        # iterate over each item and gather media and songs for download
+        i = 0
+        itemsCount = len(plan["items"])
+        for item in plan["items"]:
+            i = i + 1
+            self.secondLabel.setText("Processing item " + str(i) + " of " + str(itemsCount))
+
+            percent = (i / float(itemsCount)) * 100
+            self.progressBar.setValue(percent)
+
+            if item["type"] == "PlanMedia":
+                itemMedia = item["plan_item_medias"]
+                # no media
+                if len(itemMedia) <= 0:
+                    continue
+
+                media = s.media(itemMedia[0]["media_id"])
+
+                # YouTube URLs are given as "public_url"
+                mediaPublicUrl = None
+                mediaUrl = None
+                if "public_url" in media["attachments"][0]:
+                    mediaPublicUrl = media["attachments"][0]["public_url"]
+                else:
+                    mediaUrl = media["attachments"][0]["url"]
+
+                fileName = media["attachments"][0]["filename"]
+
+                processedItems.append({"type": "PlanMedia", "mediaUrl": mediaUrl, "mediaPublicUrl": mediaPublicUrl, "fileName": fileName})
+
+            elif item["type"] == "PlanSong":
+                title = item["title"]
+                author = item["song"]["author"]
+                arrangementId = item["arrangement"]["id"]
+                arrangement = s.arrangement(arrangementId)
+                lyrics = arrangement["chord_chart"]
+                sequence = arrangement["sequence_to_s"]
+                processedItems.append({"type": "PlanSong", "title": title, "author": author, "lyrics": lyrics, "sequence": sequence})
+
+            elif item["type"] == "PlanItem" and item["using_custom_slides"] == True:
+                title = item["title"] + " - " + plan["dates"]  # append the date, since custom slides are plan specific
+                customSlides = item["custom_slides"]
+                processedItems.append({"type": "PlanCustomSlides", "title": title, "customSlides": customSlides})
+
+
+        i = 0
+        itemsCount = len(processedItems)
+        for item in processedItems:
+            i = i + 1
+            self.secondLabel.setText("Processing item " + str(i) + " of " + str(itemsCount))
+
+            if self.progressCanceled:
+                return
+
+            if item["type"] == "PlanMedia":  
+                self.progressLabel.setText("Downloading Media " + item["fileName"])
+
+                # when downloading from YouTube, we don't know the extension until the file is downloaded
+                newFileName = self.downloadMedia(item["mediaUrl"], item["mediaPublicUrl"], item["fileName"])
+                if self.progressCanceled:
+                    return
+
+                if newFileName is None:
+                    self.textEdit.append("Media Download FAILED: " + item["fileName"])
+                    continue
+
+                # add media to OpenLP
+                item["fileName"] = newFileName
+                self.addMediaToAppropriateManager(newFileName)
+
+                self.textEdit.append("Media Downloaded: " + item["fileName"])
+
+            elif item["type"] == "PlanSong":
+                self.progressLabel.setText("Processing Song " + item["title"])
+
+                ok = self.processSong(item["title"], item["author"], item["lyrics"], item["sequence"])
+
+                okMsg = ""
+                if not ok:
+                    okMsg = " [FAIL]"
+
+                self.textEdit.append("Song Processed: " + item["title"] + okMsg)
+
+            elif item["type"] == "PlanCustomSlides":
+                self.progressLabel.setText("Processing Custom Slides " + item["title"])
+                ok = self.processCustomSlides(item["title"], item["customSlides"])
+
+                okMsg = ""
+                if not ok:
+                    okMsg = " [FAIL]"
+
+                self.textEdit.append("Custom Slides Processed: " + item["title"] + okMsg)
+
+        return
+
+    def processCustomSlides(self, title, customSlides):
+        """
+        add the custom slides... code copied from custom/forms/editcustomform.py
+        """
+
+        # see if slides already exist, if so, delete them to avoid duplicates
+        item = self.custom.findItemByDisplayName(title)
+        if item != None:
+            self.custom.plugin.db_manager.delete_object(CustomSlide, item.data(QtCore.Qt.UserRole))
+            self.custom.on_search_text_button_clicked()
+
+        custom_slide = CustomSlide()
+        sxml = CustomXMLBuilder()
+        for count in range(len(customSlides)):
+            sxml.add_verse_to_lyrics('custom', str(count + 1), customSlides[count]['body'])
+        custom_slide.title = title
+        custom_slide.text = str(sxml.extract_xml(), 'utf-8')
+        custom_slide.credits = ''
+        custom_slide.theme_name = ''
+        success = self.custom.plugin.db_manager.save_object(custom_slide)
+        self.custom.auto_select_id = custom_slide.id
+
+        self.custom.on_clear_text_button_click()
+        self.custom.on_selection_change()
+
+        # add the slides to the service
+        item = self.custom.addMedia(custom_slide.id)
+        if item == None:
+            self.textEdit.append("Failed to add " + title)
+            return False
+
+        self.custom.add_to_service(item)
+
+        return success
+
+    # Search the song lyrics for the section named by sequenceTitle.
+    # For example, if the sequence title is "Chorus" (or "C"), return the text between
+    # the "Chorus" heading and the next section heading (e.g. the next "Verse").
+    def findLyrics(self, sequenceTitle, lyrics):
+        sequenceSplitExp = re.compile('(\D*)(\d*)')
+        titleExp = re.compile('\(?(Verse|Chorus|Tag|Outro|Bridge|Misc)\)?\s?(\d?)|\(?(\S)(\d+)\)?')
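+        # Illustrative examples: sequenceSplitExp splits "Chorus 2" into
+        # ("Chorus ", "2"); titleExp matches headings such as "Verse 1",
+        # "(Chorus)" or "(C2)" in the chord chart.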
+
+        sequenceTitleMatch = sequenceSplitExp.match(sequenceTitle)
+        if sequenceTitleMatch == None:
+            return ""
+
+        title = sequenceTitleMatch.group(1)
+        number = sequenceTitleMatch.group(2)
+        if number == "":
+            number = "1"
+
+        endCharacter = len(lyrics)
+        startCharacter = endCharacter
+
+        titleFound = False
+        for match in titleExp.finditer(lyrics):
+
+            groupCount = len(match.groups())
+            titleLong = match.group(1) if (groupCount >= 1) else "Undefined"
+            titleShort = match.group(3) if (groupCount >= 3) else "Undefined"
+            if titleLong == None:
+                titleLong = "Undefined"
+
+            numberLong = match.group(2) if (groupCount >= 2) else ""
+            numberShort = match.group(4) if (groupCount >= 4) else ""
+
+            if numberLong == "" and numberShort is None:
+                numberLong = "1"
+
+            # ok, we found the next title in the song
+            if titleFound:
+                endCharacter = match.start()
+                break
+
+            # we found a match to the title, so now we need to just set titleFound
+            # and find the next match
+            if (title == titleLong or title == titleShort or titleLong.startswith(title)) and (number == numberLong or number == numberShort):
+                titleFound = True
+                startCharacter = match.end()
+
+        # now we know the lyrics to extract from the song are between
+        # startCharacter and endCharacter
+        extractedLyrics = lyrics[startCharacter:endCharacter]
+        extractedLyrics = extractedLyrics.strip('\n')
+        return extractedLyrics
+        
+
+    def processSong(self, title, authors, lyrics, sequence):
+        """
+        add the song
+        """
+        # emulate the user entering a new song
+        self.songs.edit_song_form.new_song()
+        self.songs.edit_song_form.title_edit.setText(title)
+
+        # emulate on_author_add_button_clicked
+        authorList = authors.split(",")
+        authorDisplayName = "Unknown Author"
+        firstName = ""
+        lastName = ""
+        if len(authorList) >= 1:
+            authorDisplayName = authorList[0].strip()
+            authorNameList = authorDisplayName.split(" ")
+            firstName = authorNameList[0] if len(authorNameList) > 1 else ""
+            lastName = authorNameList[1] if len(authorNameList) > 1 else ""
+            author = Author.populate(first_name=firstName, last_name=lastName, display_name=authorDisplayName)
+
+        # find existing song, if found simply use that
+        displayName = title + " (" + authorDisplayName + ")"
+        item = self.songs.findItemByDisplayName(displayName)
+        if item:
+            self.songs.add_to_service(item)
+            return True
+
+        self.songs.edit_song_form.manager.save_object(author)
+        self.songs.edit_song_form._add_author_to_list(author, "words")
+        self.songs.edit_song_form.load_authors()
+        self.songs.edit_song_form.authors_combo_box.setCurrentIndex(0)
+        
+        # set up each chorus/verse as its own slide
+        # TODO: break it based on number of lines
+        sequenceList = sequence.split(", ")
+        compiledSequenceList = []
+        itemMap = {}
+        for s in sequenceList:
+            s = s.strip()
+            if len(s) == 1:
+                s += "1"
+
+            sLower = s.lower()
+            sLower = sLower.replace("tag", "ending")  # OpenLP doesn't recognise 'tag'
+
+            if sLower not in itemMap:
+                extractedLyrics = self.findLyrics(s, lyrics)
+                if not extractedLyrics:
+                    continue
+
+                item = QtGui.QTableWidgetItem(extractedLyrics)
+                item.setData(QtCore.Qt.UserRole, sLower)
+                item.setText(extractedLyrics)
+                itemMap[sLower] = item
+                self.songs.edit_song_form.verse_list_widget.setRowCount(self.songs.edit_song_form.verse_list_widget.rowCount() + 1)
+                self.songs.edit_song_form.verse_list_widget.setItem(self.songs.edit_song_form.verse_list_widget.rowCount() - 1, 0, item)
+
+            if sLower in itemMap:
+                compiledSequenceList.append(sLower)
+
+        verseOrder = " ".join(compiledSequenceList)
+        self.songs.edit_song_form.verse_order_edit.setText(verseOrder)
+
+        # emulate song form accept method
+        self.songs.edit_song_form.clear_caches()
+        if not self.songs.edit_song_form._validate_song():
+            return False
+
+        self.songs.edit_song_form.save_song()
+        songId = self.songs.edit_song_form.song.id
+        self.songs.edit_song_form.song = None
+
+        # add new song to the list
+        self.songs.on_clear_text_button_click()
+
+        # get the item and add to the service manager
+        item = self.songs.findItem(songId)
+        self.songs.add_to_service(item)
+        return True
+
+    def addMediaToAppropriateManager(self, fileName):
+        """
+        Give the file to every manager that can handle it (images, presentations or media)
+        """
+        self.addMedia(self.images, fileName)
+        self.addMedia(self.presentations, fileName)
+        self.addMedia(self.media, fileName)
+
+
+    def addMedia(self, manager, fileName):
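+        """
+        Add the file to the given manager if it handles this file type, then add it to the service
+        """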
+        # check file type is supported
+        if not manager.handlesFile(fileName):
+            return True
+
+        item = manager.addMedia(fileName)
+        if item == None:
+            self.textEdit.append("Failed to add " + fileName)
+            return False
+
+        manager.add_to_service(item)
+        return True
+
+          
+    @property
+    def songs(self):
+        """
+        Adds the songs plugin to the class dynamically
+        """
+        if not hasattr(self, '_songs') or not self._songs:
+            self._songs = Registry().get('songs')
+            monkeyPatchSongs(self._songs)
+        return self._songs
+          
+    @property
+    def presentations(self):
+        """
+        Adds the presentations plugin to the class dynamically
+        """
+        if not hasattr(self, '_presentations') or not self._presentations:
+            self._presentations = Registry().get('presentations')
+            monkeyPatchGeneric(self._presentations)
+            monkeyPatchPresentations(self._presentations)
+        return self._presentations
+
+    @property
+    def custom(self):
+        """
+        Adds the custom slides plugin to the class dynamically
+        """
+        if not hasattr(self, '_custom') or not self._custom:
+            self._custom = Registry().get('custom')
+            monkeyPatchGeneric(self._custom)
+            monkeyPatchCustom(self._custom)
+        return self._custom
+
+    @property
+    def media(self):
+        """
+        Adds the media plugin to the class dynamically
+        """
+        if not hasattr(self, '_media') or not self._media:
+            self._media = Registry().get('media')
+            monkeyPatchGeneric(self._media)
+            monkeyPatchMedia(self._media)
+        return self._media
+
+    @property
+    def images(self):
+        """
+        Adds the images plugin to the class dynamically
+        """
+        if not hasattr(self, '_images') or not self._images:
+            self._images = Registry().get('images')
+            monkeyPatchGeneric(self._images)
+            monkeyPatchImages(self._images)
+        return self._images
+
+    def downloadMedia(self, url, publicUrl, fileName):
+        """
+        download a single media item
+        """
+
+        # Skip the download if the file already exists. There is no way to
+        # guarantee the file has not changed, but it speeds up development.
+        # TODO: can we do some date check?
+        fullFileName = os.path.join(self.outputDirectory, fileName)
+        if os.path.exists(fullFileName):
+            return fullFileName
+
+        if publicUrl != None:
+            return self.downloadPublicMedia(publicUrl, fullFileName)
+
+        return self.downloadPrivateMedia(url, fullFileName)
+
+    def downloadPublicMedia(self, url, fileName):
+        """
+        download a single media item from an external website
+        """
+        y = Youtube()
+        return y.download(url, fileName)
+
+    def downloadPrivateMedia(self, url, fileName):
+        """
+        download a single media item from planning center website
+        """
+        s = Session()
+        r = s.urlStream(url)
+        if r.status_code != 200:
+            return None
+
+        bytesRead = 0
+        contentLength = float(r.headers['content-length'])
+        with open(fileName + ".part", 'wb') as f:
+            for chunk in r.iter_content(chunk_size=10240):
+                if chunk:  # filter out keep-alive chunks
+                    bytesRead += len(chunk)
+                    f.write(chunk)
+                    f.flush()
+
+                    percent = (bytesRead / contentLength) * 100
+                    self.progressBar.setValue(percent)
+
+                if self.progressCanceled:
+                    return None
+
+        os.rename(fileName + ".part", fileName)
+        return fileName
+
+    def on_button_clicked(self):
+        """
+        Close the dialog, or abort the import if it is still running
+        """
+        if self.progressComplete:
+            self.close()
+        else:
+            self.progressCanceled = True
+
+    def on_openDirButton_clicked(self):
+        """
+        Open a file browser so the media can be copied to a USB stick or elsewhere
+        """
+        QtGui.QDesktopServices.openUrl(QtCore.QUrl.fromLocalFile(self.outputDirectory))
+        
+
+
+"""
+Monkey patch for all managers
+"""
+def monkeyPatchGeneric(target):
+    def addMedia(self, fileName):
+        """
+        Add the file and return the item handle; if the file was already added, return the existing item
+        """
+        existingItem = self.findItem(fileName)
+        if existingItem != None:
+            return existingItem
+
+        self.validate_and_load([fileName])
+        return self.findItem(fileName)
+
+    def handlesFile(self, fileName):
+        """
+        Using drag and drop support, determine if the manager can handle the given file type
+        """
+        file_type = fileName.split('.')[-1]
+        return file_type.lower() in self.on_new_file_masks
+
+    def findItem(self, fileName):
+        """
+        Find item given a filename - must be implemented
+        """
+        return None
+
+    target.addMedia = types.MethodType(addMedia, target)
+    target.handlesFile = types.MethodType(handlesFile, target)
+    target.findItem = types.MethodType(findItem, target)
+
+"""
+Monkey patch for 'image' manager
+"""
+def monkeyPatchImages(target):
+    def addMedia(self, fileName):
+        """
+        Add the file and return the item handle; if the file was already added, return the existing item
+        """
+        existingItem = self.findItem(fileName)
+        if existingItem != None:
+            return existingItem
+
+        self.save_new_images_list([fileName])
+        return self.findItem(fileName)
+
+    def findItem(self, fileName):
+        """
+        Find item given a filename
+        """
+        for count in range(self.list_view.count()):
+            item = self.list_view.item(count)
+            itemFileName = item.data(0, QtCore.Qt.UserRole).filename
+            if itemFileName == fileName:
+                return item
+
+    target.addMedia = types.MethodType(addMedia, target)
+    target.findItem = types.MethodType(findItem, target)
+
+"""
+Monkey patch for 'presentations' manager
+"""
+def monkeyPatchPresentations(target):
+    def findItem(self, fileName):
+        """
+        Find item given a filename
+        """
+        for count in range(self.list_view.count()):
+            item = self.list_view.item(count)
+            itemFileName = item.data(QtCore.Qt.UserRole)
+            if itemFileName == fileName:
+                return item
+
+    target.findItem = types.MethodType(findItem, target)
+
+"""
+Monkey patch for 'custom' slides manager
+"""
+def monkeyPatchCustom(target):
+    def findItem(self, fileName):
+        """
+        Find an item given an id
+        """
+        for count in range(self.list_view.count()):
+            item = self.list_view.item(count)
+            itemFileName = item.data(QtCore.Qt.UserRole)
+            if itemFileName == fileName:
+                return item
+
+    def findItemByDisplayName(self, name):
+        """
+        Find an item given its display name
+        """
+        for count in range(self.list_view.count()):
+            item = self.list_view.item(count)
+            itemName = item.data(QtCore.Qt.DisplayRole)
+            if name == itemName:
+                return item
+
+    target.findItem = types.MethodType(findItem, target)
+    target.findItemByDisplayName = types.MethodType(findItemByDisplayName, target)
+
+"""
+Monkey patch for 'media' manager
+"""
+def monkeyPatchMedia(target):
+    def findItem(self, fileName):
+        """
+        Find item given a filename
+        """
+        for count in range(self.list_view.count()):
+            item = self.list_view.item(count)
+            itemFileName = item.data(QtCore.Qt.UserRole)
+            if itemFileName == fileName:
+                return item
+
+    target.findItem = types.MethodType(findItem, target)
+
+"""
+Monkey patch for 'songs' manager
+"""
+def monkeyPatchSongs(target):
+    def findItem(self, songId):
+        """
+        Find an item given a song id
+        """
+        for count in range(self.list_view.count()):
+            item = self.list_view.item(count)
+            itemId = item.data(QtCore.Qt.UserRole)
+            if songId == itemId:
+                return item
+
+
+    def findItemByDisplayName(self, name):
+        """
+        Find an item given its display name, "title (author name)"
+        """
+        for count in range(self.list_view.count()):
+            item = self.list_view.item(count)
+            itemName = item.data(QtCore.Qt.DisplayRole)
+            if name == itemName:
+                return item
+
+    target.findItem = types.MethodType(findItem, target)
+    target.findItemByDisplayName = types.MethodType(findItemByDisplayName, target)
\ No newline at end of file

=== added file 'openlp/plugins/planningcenter/forms/youtubedownloaddialog.py'
--- openlp/plugins/planningcenter/forms/youtubedownloaddialog.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/forms/youtubedownloaddialog.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,66 @@
+# -*- coding: utf-8 -*-
+# vim: autoindent shiftwidth=4 expandtab textwidth=120 tabstop=4 softtabstop=4
+
+###############################################################################
+# OpenLP - Open Source Lyrics Projection                                      #
+# --------------------------------------------------------------------------- #
+# Copyright (c) 2008-2015 OpenLP Developers                                   #
+# --------------------------------------------------------------------------- #
+# This program is free software; you can redistribute it and/or modify it     #
+# under the terms of the GNU General Public License as published by the Free  #
+# Software Foundation; version 2 of the License.                              #
+#                                                                             #
+# This program is distributed in the hope that it will be useful, but WITHOUT #
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or       #
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for    #
+# more details.                                                               #
+#                                                                             #
+# You should have received a copy of the GNU General Public License along     #
+# with this program; if not, write to the Free Software Foundation, Inc., 59  #
+# Temple Place, Suite 330, Boston, MA 02111-1307 USA                          #
+###############################################################################
+
+import os
+
+from PyQt4 import QtGui, QtCore, uic
+
+from openlp.core.common import AppLocation, translate
+from openlp.plugins.planningcenter.lib import Youtube
+
+class YoutubeDownloadDialog(QtGui.QDialog):
+    """
+    Provide UI for downloading youtube media
+    """
+    def __init__(self, plugin):
+        """
+        Initialise the form
+        """
+        QtGui.QDialog.__init__(self, plugin.main_window)
+        self.manager = plugin.manager
+        self.plugin = plugin
+        self.item_id = None
+        self.setupUi()
+
+    def setupUi(self):
+        """
+        Setup the UI
+        """
+        uiPath = os.path.join(AppLocation.get_directory(AppLocation.PluginsDir),
+                                     'planningcenter', 'resources', 'youtubedownloaddialog.ui')
+        uic.loadUi(uiPath, self)
+        self.setWindowIcon(QtGui.QIcon(self.plugin.youtubeIconPath()))
+
+    def exec_(self):
+        """
+        Execute the dialog and return the exit code.
+        """
+        return QtGui.QDialog.exec_(self)
+
+    def accept(self):
+        y = Youtube()
+        y.download(self.urlEdit.text(), self.filenameEdit.text()) 
+        QtGui.QMessageBox.information(self,
+                                          translate('PlanningCenterPlugin.YoutubeDownloadDialog', 'Ok'),
+                                          translate('PlanningCenterPlugin.YoutubeDownloadDialog',
+                                                    'Download complete.'))
+        QtGui.QDialog.accept(self)

=== added directory 'openlp/plugins/planningcenter/lib'
=== added file 'openlp/plugins/planningcenter/lib/__init__.py'
--- openlp/plugins/planningcenter/lib/__init__.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/__init__.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,25 @@
+# -*- coding: utf-8 -*-
+# vim: autoindent shiftwidth=4 expandtab textwidth=120 tabstop=4 softtabstop=4
+
+###############################################################################
+# OpenLP - Open Source Lyrics Projection                                      #
+# --------------------------------------------------------------------------- #
+# Copyright (c) 2008-2015 OpenLP Developers                                   #
+# --------------------------------------------------------------------------- #
+# This program is free software; you can redistribute it and/or modify it     #
+# under the terms of the GNU General Public License as published by the Free  #
+# Software Foundation; version 2 of the License.                              #
+#                                                                             #
+# This program is distributed in the hope that it will be useful, but WITHOUT #
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or       #
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for    #
+# more details.                                                               #
+#                                                                             #
+# You should have received a copy of the GNU General Public License along     #
+# with this program; if not, write to the Free Software Foundation, Inc., 59  #
+# Temple Place, Suite 330, Boston, MA 02111-1307 USA                          #
+###############################################################################
+
+from .planningcentertab import PlanningCenterTab
+from .session import Session
+from .youtube import Youtube

=== added file 'openlp/plugins/planningcenter/lib/db.py'
--- openlp/plugins/planningcenter/lib/db.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/db.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,55 @@
+# -*- coding: utf-8 -*-
+# vim: autoindent shiftwidth=4 expandtab textwidth=120 tabstop=4 softtabstop=4
+
+###############################################################################
+# OpenLP - Open Source Lyrics Projection                                      #
+# --------------------------------------------------------------------------- #
+# Copyright (c) 2008-2015 OpenLP Developers                                   #
+# --------------------------------------------------------------------------- #
+# This program is free software; you can redistribute it and/or modify it     #
+# under the terms of the GNU General Public License as published by the Free  #
+# Software Foundation; version 2 of the License.                              #
+#                                                                             #
+# This program is distributed in the hope that it will be useful, but WITHOUT #
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or       #
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for    #
+# more details.                                                               #
+#                                                                             #
+# You should have received a copy of the GNU General Public License along     #
+# with this program; if not, write to the Free Software Foundation, Inc., 59  #
+# Temple Place, Suite 330, Boston, MA 02111-1307 USA                          #
+###############################################################################
+"""
+The :mod:`db` module provides the database and schema that is the backend for the PlanningCenter plugin.
+"""
+
+from sqlalchemy import Column, Table, types
+from sqlalchemy.orm import mapper
+
+from openlp.core.lib.db import BaseModel, init_db
+
+
+class AlertItem(BaseModel):
+    """
+    AlertItem model
+    """
+    pass
+
+
+def init_schema(url):
+    """
+    Setup the alerts database connection and initialise the database schema
+
+    :param url:
+        The database to setup
+    """
+    session, metadata = init_db(url)
+
+    alerts_table = Table('alerts', metadata,
+                         Column('id', types.Integer(), primary_key=True),
+                         Column('text', types.UnicodeText, nullable=False))
+
+    mapper(AlertItem, alerts_table)
+
+    metadata.create_all(checkfirst=True)
+    return session

=== added file 'openlp/plugins/planningcenter/lib/planningcentertab.py'
--- openlp/plugins/planningcenter/lib/planningcentertab.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/planningcentertab.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,95 @@
+# -*- coding: utf-8 -*-
+# vim: autoindent shiftwidth=4 expandtab textwidth=120 tabstop=4 softtabstop=4
+
+###############################################################################
+# OpenLP - Open Source Lyrics Projection                                      #
+# --------------------------------------------------------------------------- #
+# Copyright (c) 2008-2015 OpenLP Developers                                   #
+# --------------------------------------------------------------------------- #
+# This program is free software; you can redistribute it and/or modify it     #
+# under the terms of the GNU General Public License as published by the Free  #
+# Software Foundation; version 2 of the License.                              #
+#                                                                             #
+# This program is distributed in the hope that it will be useful, but WITHOUT #
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or       #
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for    #
+# more details.                                                               #
+#                                                                             #
+# You should have received a copy of the GNU General Public License along     #
+# with this program; if not, write to the Free Software Foundation, Inc., 59  #
+# Temple Place, Suite 330, Boston, MA 02111-1307 USA                          #
+###############################################################################
+
+from PyQt4 import QtGui
+
+from openlp.core.common import Settings, UiStrings, translate
+from openlp.core.lib import ColorButton, SettingsTab
+from openlp.core.lib.ui import create_valign_selection_widgets
+
+
+class PlanningCenterTab(SettingsTab):
+    """
+    PlanningCenterTab is the settings tab in the settings dialog.
+    """
+    def __init__(self, parent, name, visible_title, icon_path):
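+        """
+        Initialise the settings tab and register the plugin's default settings
+        """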
+        extra_settings = {
+            'planningcenter/media directory': "Media"
+        }
+        Settings.extend_default_settings(extra_settings)
+
+        super(PlanningCenterTab, self).__init__(parent, name, visible_title, icon_path)
+
+    def setupUi(self):
+        self.setObjectName('PlanningCenterTab')
+        super(PlanningCenterTab, self).setupUi()
+
+        self.advanced_group_box = QtGui.QGroupBox(self.left_column)
+        self.advanced_group_box.setObjectName('advanced_group_box')
+        self.advanced_layout = QtGui.QVBoxLayout(self.advanced_group_box)
+        self.advanced_layout.setObjectName('advanced_layout')
+
+        self.media_directory_label = QtGui.QLabel(self.advanced_group_box)
+        self.media_directory_label.setObjectName('media_directory_label')
+        self.advanced_layout.addWidget(self.media_directory_label)
+
+        self.media_directory_edit = QtGui.QLineEdit(self.advanced_group_box)
+        self.media_directory_edit.setObjectName('media_directory_edit')
+        self.advanced_layout.addWidget(self.media_directory_edit)
+        
+        self.left_layout.addWidget(self.advanced_group_box)
+        self.left_layout.addStretch()
+        self.right_layout.addStretch()
+
+        # Signals and slots
+        #self.media_directory_edit.valueChanged.connect(self.on_media_directory_edit_changed)
+
+    def retranslateUi(self):
+        self.advanced_group_box.setTitle(UiStrings().Advanced)
+        self.media_directory_label.setText(translate('PlanningCenterPlugin.PlanningCenterTab', 'Media directory'))
+       
+    #def on_media_directory_edit_changed(self, text):
+    #    """
+    #    The media directory has been changed.
+    #    """
+    #    self.changed = True
+
+    def load(self):
+        """
+        Load the settings into the UI.
+        """
+        settings = Settings()
+        settings.beginGroup(self.settings_section)
+        self.media_directory = settings.value('media directory')
+        settings.endGroup()
+        self.media_directory_edit.setText(self.media_directory)
+        self.changed = False
+        
+    def save(self):
+        """
+        Save the changes on exit of the Settings dialog.
+        """
+        settings = Settings()
+        settings.beginGroup(self.settings_section)
+        settings.setValue('media directory', self.media_directory_edit.text())
+        settings.endGroup()
+        self.changed = False

=== added file 'openlp/plugins/planningcenter/lib/session.py'
--- openlp/plugins/planningcenter/lib/session.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/session.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,119 @@
+import logging
+import requests
+import sys
+import json
+
+log = logging.getLogger(__name__)
+
+# Start a session so we can have persistent cookies
+g_session = requests.session()
+g_loggedIn = False
+g_loginData = None
+
+class Session():
+    """
+    Handle login and session data for the website, following
+    http://stackoverflow.com/questions/8316818/login-to-website-using-python
+
+    Also provide access to the JSON data for Planning Center Online:
+    http://get.planningcenteronline.com/api
+    """
+    def __init__(self):
+        return
+
+    def isLoggedIn(self):
+        return g_loggedIn
+
+    def login(self, email, password):
+        global g_loginData
+
+        # This is the form data that the page sends when logging in
+        g_loginData = {
+            'email': email,
+            'password': password,
+            'submit': 'login',
+        }
+
+        return self.relogin()
+
+    def relogin(self):
+        """
+        Attempt to log in (again) using the stored form data
+        """
+        global g_session
+        global g_loggedIn
+
+        g_loggedIn = False
+
+        try:
+            r = g_session.post('https://accounts.planningcenteronline.com/login', data=g_loginData)
+        except:
+            log.info(sys.exc_info()[0])
+            return g_loggedIn
+
+        # If we were not bounced back to the login page, the login succeeded.
+        # Really we should test this after each URL fetch and auto-relogin.
+        if "<title>Login - Accounts</title>" not in r.text:
+            g_loggedIn = True
+
+        return g_loggedIn
+
+
+    def organisation(self):
+        """
+        Contains organisation data - service types
+        """
+        return json.loads(self.url('https://services.planningcenteronline.com/organization.json').text)
+
+    def serviceTypePlans(self, serviceTypeId):
+        """
+        Contains all plans for a certain service type
+        """
+        return json.loads(self.url('https://planningcenteronline.com/service_types/' + str(serviceTypeId) + '/plans.json').text)
+    
+    def plan(self, planId):
+        """
+        Plan data
+        """
+        return json.loads(self.url('https://planningcenteronline.com/plans/' + str(planId) + '.json?include_slides=true').text)
+
+    def media(self, mediaId):
+        """
+        Media data
+        """
+        return json.loads(self.url('https://services.planningcenteronline.com/medias/' + str(mediaId) + '.json').text)
+    
+    def arrangement(self, arrangementId):
+        """
+        Arrangement data
+        """
+        return json.loads(self.url('https://planningcenteronline.com/arrangements/' + str(arrangementId) + '.json').text)
+
+    def urlStream(self, u):
+        global g_session
+        return g_session.get(u, stream=True)
+
+    def url(self, u):
+        global g_session
+        try:
+            r = g_session.get(u)
+        except:
+            # attempt a relogin if we haven't used the session for a while
+            self.relogin()
+            r = g_session.get(u)
+
+        return r
+
+# module testing
+if __name__ == "__main__":
+    s = Session()
+    email = "xxx"
+    password = "xxx"
+    print("Logging in to planningcenteronline.com")
+    print("Login succeeded" if s.login(email, password) else "Login failed")
+
+    # plan() returns parsed JSON, so dump it rather than writing the dict
+    with open("plan_result.json", "w") as f:
+        json.dump(s.plan("16910481"), f, indent=4)
+
+    print("Test complete")

=== added directory 'openlp/plugins/planningcenter/lib/youtube-dl'
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/LICENSE'
--- openlp/plugins/planningcenter/lib/youtube-dl/LICENSE	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/LICENSE	2015-09-23 12:21:33 +0000
@@ -0,0 +1,24 @@
+This is free and unencumbered software released into the public domain.
+
+Anyone is free to copy, modify, publish, use, compile, sell, or
+distribute this software, either in source code form or as a compiled
+binary, for any purpose, commercial or non-commercial, and by any
+means.
+
+In jurisdictions that recognize copyright laws, the author or authors
+of this software dedicate any and all copyright interest in the
+software to the public domain. We make this dedication for the benefit
+of the public at large and to the detriment of our heirs and
+successors. We intend this dedication to be an overt act of
+relinquishment in perpetuity of all present and future rights to this
+software under copyright law.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+OTHER DEALINGS IN THE SOFTWARE.
+
+For more information, please refer to <http://unlicense.org/>

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/README.txt'
--- openlp/plugins/planningcenter/lib/youtube-dl/README.txt	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/README.txt	2015-09-23 12:21:33 +0000
@@ -0,0 +1,1035 @@
+youtube-dl - download videos from youtube.com or other video platforms
+
+-   INSTALLATION
+-   DESCRIPTION
+-   OPTIONS
+-   CONFIGURATION
+-   OUTPUT TEMPLATE
+-   FORMAT SELECTION
+-   VIDEO SELECTION
+-   FAQ
+-   DEVELOPER INSTRUCTIONS
+-   BUGS
+-   COPYRIGHT
+
+
+
+INSTALLATION
+
+
+To install it right away for all UNIX users (Linux, OS X, etc.), type:
+
+    sudo curl https://yt-dl.org/latest/youtube-dl -o /usr/local/bin/youtube-dl
+    sudo chmod a+rx /usr/local/bin/youtube-dl
+
+If you do not have curl, you can alternatively use a recent wget:
+
+    sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl
+    sudo chmod a+rx /usr/local/bin/youtube-dl
+
+Windows users can download a .exe file and place it in their home
+directory or any other location on their PATH.
+
+OS X users can install YOUTUBE-DL with Homebrew.
+
+    brew install youtube-dl
+
+You can also use pip:
+
+    sudo pip install youtube-dl
+
+Alternatively, refer to the developer instructions below for how to
+check out and work with the git repository. For further options,
+including PGP signatures, see
+https://rg3.github.io/youtube-dl/download.html .
+
+
+
+DESCRIPTION
+
+
+YOUTUBE-DL is a small command-line program to download videos from
+YouTube.com and a few more sites. It requires the Python interpreter,
+version 2.6, 2.7, or 3.2+, and it is not platform specific. It should
+work on your Unix box, on Windows or on Mac OS X. It is released to the
+public domain, which means you can modify it, redistribute it or use it
+however you like.
+
+    youtube-dl [OPTIONS] URL [URL...]
+
+
+
+OPTIONS
+
+
+    -h, --help                       Print this help text and exit
+    --version                        Print program version and exit
+    -U, --update                     Update this program to latest version. Make sure that you have sufficient permissions (run with sudo if needed)
+    -i, --ignore-errors              Continue on download errors, for example to skip unavailable videos in a playlist
+    --abort-on-error                 Abort downloading of further videos (in the playlist or the command line) if an error occurs
+    --dump-user-agent                Display the current browser identification
+    --list-extractors                List all supported extractors
+    --extractor-descriptions         Output descriptions of all supported extractors
+    --force-generic-extractor        Force extraction to use the generic extractor
+    --default-search PREFIX          Use this prefix for unqualified URLs. For example "gvsearch2:" downloads two videos from google videos for youtube-dl "large apple".
+                                     Use the value "auto" to let youtube-dl guess ("auto_warning" to emit a warning when guessing). "error" just throws an error. The
+                                     default value "fixup_error" repairs broken URLs, but emits an error if this is not possible instead of searching.
+    --ignore-config                  Do not read configuration files. When given in the global configuration file /etc/youtube-dl.conf: Do not read the user configuration
+                                     in ~/.config/youtube-dl/config (%APPDATA%/youtube-dl/config.txt on Windows)
+    --flat-playlist                  Do not extract the videos of a playlist, only list them.
+    --no-color                       Do not emit color codes in output
+
+
+Network Options:
+
+    --proxy URL                      Use the specified HTTP/HTTPS proxy. Pass in an empty string (--proxy "") for direct connection
+    --socket-timeout SECONDS         Time to wait before giving up, in seconds
+    --source-address IP              Client-side IP address to bind to (experimental)
+    -4, --force-ipv4                 Make all connections via IPv4 (experimental)
+    -6, --force-ipv6                 Make all connections via IPv6 (experimental)
+    --cn-verification-proxy URL      Use this proxy to verify the IP address for some Chinese sites. The default proxy specified by --proxy (or none, if the option is
+                                     not present) is used for the actual downloading. (experimental)
+
+
+Video Selection:
+
+    --playlist-start NUMBER          Playlist video to start at (default is 1)
+    --playlist-end NUMBER            Playlist video to end at (default is last)
+    --playlist-items ITEM_SPEC       Playlist video items to download. Specify indices of the videos in the playlist separated by commas like: "--playlist-items 1,2,5,8"
+                                     if you want to download videos indexed 1, 2, 5, 8 in the playlist. You can specify a range: "--playlist-items 1-3,7,10-13", it will
+                                     download the videos at index 1, 2, 3, 7, 10, 11, 12 and 13.
+    --match-title REGEX              Download only matching titles (regex or caseless sub-string)
+    --reject-title REGEX             Skip download for matching titles (regex or caseless sub-string)
+    --max-downloads NUMBER           Abort after downloading NUMBER files
+    --min-filesize SIZE              Do not download any videos smaller than SIZE (e.g. 50k or 44.6m)
+    --max-filesize SIZE              Do not download any videos larger than SIZE (e.g. 50k or 44.6m)
+    --date DATE                      Download only videos uploaded on this date
+    --datebefore DATE                Download only videos uploaded on or before this date (i.e. inclusive)
+    --dateafter DATE                 Download only videos uploaded on or after this date (i.e. inclusive)
+    --min-views COUNT                Do not download any videos with less than COUNT views
+    --max-views COUNT                Do not download any videos with more than COUNT views
+    --match-filter FILTER            Generic video filter (experimental). Specify any key (see help for -o for a list of available keys) to match if the key is present,
+                                     !key to check if the key is not present, key > NUMBER (like "comment_count > 12", also works with >=, <, <=, !=, =) to compare against
+                                     a number, and & to require multiple matches. Values which are not known are excluded unless you put a question mark (?) after the
+                                     operator. For example, to only match videos that have been liked more than 100 times and disliked less than 50 times (or the dislike
+                                     functionality is not available at the given service), but that also have a description, use --match-filter "like_count > 100 &
+                                     dislike_count <? 50 & description" .
+    --no-playlist                    Download only the video, if the URL refers to a video and a playlist.
+    --yes-playlist                   Download the playlist, if the URL refers to a video and a playlist.
+    --age-limit YEARS                Download only videos suitable for the given age
+    --download-archive FILE          Download only videos not listed in the archive file. Record the IDs of all downloaded videos in it.
+    --include-ads                    Download advertisements as well (experimental)
+
+
+Download Options:
+
+    -r, --rate-limit LIMIT           Maximum download rate in bytes per second (e.g. 50K or 4.2M)
+    -R, --retries RETRIES            Number of retries (default is 10), or "infinite".
+    --buffer-size SIZE               Size of download buffer (e.g. 1024 or 16K) (default is 1024)
+    --no-resize-buffer               Do not automatically adjust the buffer size. By default, the buffer size is automatically resized from an initial value of SIZE.
+    --playlist-reverse               Download playlist videos in reverse order
+    --xattr-set-filesize             Set file xattribute ytdl.filesize with expected filesize (experimental)
+    --hls-prefer-native              Use the native HLS downloader instead of ffmpeg (experimental)
+    --external-downloader COMMAND    Use the specified external downloader. Currently supports aria2c,curl,httpie,wget
+    --external-downloader-args ARGS  Give these arguments to the external downloader
+
+
+Filesystem Options:
+
+    -a, --batch-file FILE            File containing URLs to download ('-' for stdin)
+    --id                             Use only video ID in file name
+    -o, --output TEMPLATE            Output filename template. Use %(title)s to get the title, %(uploader)s for the uploader name, %(uploader_id)s for the uploader
+                                     nickname if different, %(autonumber)s to get an automatically incremented number, %(ext)s for the filename extension, %(format)s for
+                                     the format description (like "22 - 1280x720" or "HD"), %(format_id)s for the unique id of the format (like YouTube's itags: "137"),
+                                     %(upload_date)s for the upload date (YYYYMMDD), %(extractor)s for the provider (youtube, metacafe, etc), %(id)s for the video id,
+                                     %(playlist_title)s, %(playlist_id)s, or %(playlist)s (=title if present, ID otherwise) for the playlist the video is in,
+                                     %(playlist_index)s for the position in the playlist. %(height)s and %(width)s for the width and height of the video format.
+                                     %(resolution)s for a textual description of the resolution of the video format. %% for a literal percent. Use - to output to stdout.
+                                     Can also be used to download to a different directory, for example with -o '/my/downloads/%(uploader)s/%(title)s-%(id)s.%(ext)s' .
+    --autonumber-size NUMBER         Specify the number of digits in %(autonumber)s when it is present in output filename template or --auto-number option is given
+    --restrict-filenames             Restrict filenames to only ASCII characters, and avoid "&" and spaces in filenames
+    -A, --auto-number                [deprecated; use  -o "%(autonumber)s-%(title)s.%(ext)s" ] Number downloaded files starting from 00000
+    -t, --title                      [deprecated] Use title in file name (default)
+    -l, --literal                    [deprecated] Alias of --title
+    -w, --no-overwrites              Do not overwrite files
+    -c, --continue                   Force resume of partially downloaded files. By default, youtube-dl will resume downloads if possible.
+    --no-continue                    Do not resume partially downloaded files (restart from beginning)
+    --no-part                        Do not use .part files - write directly into output file
+    --no-mtime                       Do not use the Last-modified header to set the file modification time
+    --write-description              Write video description to a .description file
+    --write-info-json                Write video metadata to a .info.json file
+    --write-annotations              Write video annotations to a .annotations.xml file
+    --load-info FILE                 JSON file containing the video information (created with the "--write-info-json" option)
+    --cookies FILE                   File to read cookies from and dump cookie jar in
+    --cache-dir DIR                  Location in the filesystem where youtube-dl can store some downloaded information permanently. By default $XDG_CACHE_HOME/youtube-dl
+                                     or ~/.cache/youtube-dl . At the moment, only YouTube player files (for videos with obfuscated signatures) are cached, but that may
+                                     change.
+    --no-cache-dir                   Disable filesystem caching
+    --rm-cache-dir                   Delete all filesystem cache files
+
+
+Thumbnail images:
+
+    --write-thumbnail                Write thumbnail image to disk
+    --write-all-thumbnails           Write all thumbnail image formats to disk
+    --list-thumbnails                Simulate and list all available thumbnail formats
+
+
+Verbosity / Simulation Options:
+
+    -q, --quiet                      Activate quiet mode
+    --no-warnings                    Ignore warnings
+    -s, --simulate                   Do not download the video and do not write anything to disk
+    --skip-download                  Do not download the video
+    -g, --get-url                    Simulate, quiet but print URL
+    -e, --get-title                  Simulate, quiet but print title
+    --get-id                         Simulate, quiet but print id
+    --get-thumbnail                  Simulate, quiet but print thumbnail URL
+    --get-description                Simulate, quiet but print video description
+    --get-duration                   Simulate, quiet but print video length
+    --get-filename                   Simulate, quiet but print output filename
+    --get-format                     Simulate, quiet but print output format
+    -j, --dump-json                  Simulate, quiet but print JSON information. See --output for a description of available keys.
+    -J, --dump-single-json           Simulate, quiet but print JSON information for each command-line argument. If the URL refers to a playlist, dump the whole playlist
+                                     information in a single line.
+    --print-json                     Be quiet and print the video information as JSON (video is still being downloaded).
+    --newline                        Output progress bar as new lines
+    --no-progress                    Do not print progress bar
+    --console-title                  Display progress in console titlebar
+    -v, --verbose                    Print various debugging information
+    --dump-pages                     Print downloaded pages encoded using base64 to debug problems (very verbose)
+    --write-pages                    Write downloaded intermediary pages to files in the current directory to debug problems
+    --print-traffic                  Display sent and read HTTP traffic
+    -C, --call-home                  Contact the youtube-dl server for debugging
+    --no-call-home                   Do NOT contact the youtube-dl server for debugging
+
+
+Workarounds:
+
+    --encoding ENCODING              Force the specified encoding (experimental)
+    --no-check-certificate           Suppress HTTPS certificate validation
+    --prefer-insecure                Use an unencrypted connection to retrieve information about the video. (Currently supported only for YouTube)
+    --user-agent UA                  Specify a custom user agent
+    --referer URL                    Specify a custom referer, use if the video access is restricted to one domain
+    --add-header FIELD:VALUE         Specify a custom HTTP header and its value, separated by a colon ':'. You can use this option multiple times
+    --bidi-workaround                Work around terminals that lack bidirectional text support. Requires bidiv or fribidi executable in PATH
+    --sleep-interval SECONDS         Number of seconds to sleep before each download.
+
+
+Video Format Options:
+
+    -f, --format FORMAT              Video format code, see the "FORMAT SELECTION" for all the info
+    --all-formats                    Download all available video formats
+    --prefer-free-formats            Prefer free video formats unless a specific one is requested
+    -F, --list-formats               List all available formats
+    --youtube-skip-dash-manifest     Do not download the DASH manifests and related data on YouTube videos
+    --merge-output-format FORMAT     If a merge is required (e.g. bestvideo+bestaudio), output to given container format. One of mkv, mp4, ogg, webm, flv. Ignored if no
+                                     merge is required
+
+
+Subtitle Options:
+
+    --write-sub                      Write subtitle file
+    --write-auto-sub                 Write automatic subtitle file (YouTube only)
+    --all-subs                       Download all the available subtitles of the video
+    --list-subs                      List all available subtitles for the video
+    --sub-format FORMAT              Subtitle format, accepts formats preference, for example: "srt" or "ass/srt/best"
+    --sub-lang LANGS                 Languages of the subtitles to download (optional) separated by commas, use IETF language tags like 'en,pt'
+
+
+Authentication Options:
+
+    -u, --username USERNAME          Login with this account ID
+    -p, --password PASSWORD          Account password. If this option is left out, youtube-dl will ask interactively.
+    -2, --twofactor TWOFACTOR        Two-factor auth code
+    -n, --netrc                      Use .netrc authentication data
+    --video-password PASSWORD        Video password (vimeo, smotri)
+
+
+Post-processing Options:
+
+    -x, --extract-audio              Convert video files to audio-only files (requires ffmpeg or avconv and ffprobe or avprobe)
+    --audio-format FORMAT            Specify audio format: "best", "aac", "vorbis", "mp3", "m4a", "opus", or "wav"; "best" by default
+    --audio-quality QUALITY          Specify ffmpeg/avconv audio quality, insert a value between 0 (better) and 9 (worse) for VBR or a specific bitrate like 128K (default
+                                     5)
+    --recode-video FORMAT            Encode the video to another format if necessary (currently supported: mp4|flv|ogg|webm|mkv|avi)
+    --postprocessor-args ARGS        Give these arguments to the postprocessor
+    -k, --keep-video                 Keep the video file on disk after the post-processing; the video is erased by default
+    --no-post-overwrites             Do not overwrite post-processed files; the post-processed files are overwritten by default
+    --embed-subs                     Embed subtitles in the video (only for mkv and mp4 videos)
+    --embed-thumbnail                Embed thumbnail in the audio as cover art
+    --add-metadata                   Write metadata to the video file
+    --metadata-from-title FORMAT     Parse additional metadata like song title / artist from the video title. The format syntax is the same as --output, the parsed
+                                     parameters replace existing values. Additional templates: %(album)s, %(artist)s. Example: --metadata-from-title "%(artist)s -
+                                     %(title)s" matches a title like "Coldplay - Paradise"
+    --xattrs                         Write metadata to the video file's xattrs (using dublin core and xdg standards)
+    --fixup POLICY                   Automatically correct known faults of the file. One of never (do nothing), warn (only emit a warning), detect_or_warn (the default;
+                                     fix file if we can, warn otherwise)
+    --prefer-avconv                  Prefer avconv over ffmpeg for running the postprocessors (default)
+    --prefer-ffmpeg                  Prefer ffmpeg over avconv for running the postprocessors
+    --ffmpeg-location PATH           Location of the ffmpeg/avconv binary; either the path to the binary or its containing directory.
+    --exec CMD                       Execute a command on the file after downloading, similar to find's -exec syntax. Example: --exec 'adb push {} /sdcard/Music/ && rm
+                                     {}'
+    --convert-subtitles FORMAT       Convert the subtitles to other format (currently supported: srt|ass|vtt)
+
+
+
+CONFIGURATION
+
+
+You can configure youtube-dl by placing default arguments (such as
+--extract-audio --no-mtime to always extract the audio and not copy the
+mtime) into /etc/youtube-dl.conf and/or ~/.config/youtube-dl/config. On
+Windows, the configuration file locations are
+%APPDATA%\youtube-dl\config.txt and
+C:\Users\<user name>\youtube-dl.conf.
+
+Authentication with .netrc file
+
+You may also want to configure automatic credentials storage for
+extractors that support authentication (by providing login and password
+with --username and --password) so as not to pass credentials as
+command-line arguments on every youtube-dl execution and to prevent
+tracking plain-text passwords in shell command history. You can achieve
+this using a .netrc file on a per-extractor basis. For that you will need
+to create a .netrc file in your $HOME and restrict permissions to
+read/write by you only:
+
+    touch $HOME/.netrc
+    chmod a-rwx,u+rw $HOME/.netrc
+
+After that you can add credentials for an extractor in the following
+format, where _extractor_ is the name of the extractor in lowercase:
+
+    machine <extractor> login <login> password <password>
+
+For example:
+
+    machine youtube login myaccount@gmail.com password my_youtube_password
+    machine twitch login my_twitch_account_name password my_twitch_password
+
+To activate authentication with the .netrc file you should pass --netrc
+to youtube-dl or place it in the configuration file.
+
+On Windows you may also need to set up the %HOME% environment variable
+manually.
+
+
+
+OUTPUT TEMPLATE
+
+
+The -o option allows users to indicate a template for the output file
+names. The basic usage is not to set any template arguments when
+downloading a single file, like in
+youtube-dl -o funny_video.flv "http://some/video". However, it may
+contain special sequences that will be replaced when downloading each
+video. The special sequences have the format %(NAME)s. To clarify, that
+is a percent symbol followed by a name in parentheses, followed by a
+lowercase S. Allowed names are:
+
+-   id: The sequence will be replaced by the video identifier.
+-   url: The sequence will be replaced by the video URL.
+-   uploader: The sequence will be replaced by the nickname of the
+    person who uploaded the video.
+-   upload_date: The sequence will be replaced by the upload date in
+    YYYYMMDD format.
+-   title: The sequence will be replaced by the video title.
+-   ext: The sequence will be replaced by the appropriate extension
+    (like flv or mp4).
+-   epoch: The sequence will be replaced by the Unix epoch when creating
+    the file.
+-   autonumber: The sequence will be replaced by a five-digit number
+    that will be increased with each download, starting at zero.
+-   playlist: The name or the id of the playlist that contains the
+    video.
+-   playlist_index: The index of the video in the playlist, a five-digit
+    number.
+
+The current default template is %(title)s-%(id)s.%(ext)s.
+
+In some cases, you don't want special characters such as 中, spaces, or
+&, such as when transferring the downloaded filename to a Windows system
+or sending the filename through an 8bit-unsafe channel. In these cases,
+add the --restrict-filenames flag to get a shorter title:
+
+``` {.bash}
+$ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc
+youtube-dl test video ''_ä↭𝕐.mp4    # All kinds of weird characters
+$ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc --restrict-filenames
+youtube-dl_test_video_.mp4          # A simple file name
+```
+
+
+
+FORMAT SELECTION
+
+
+By default youtube-dl tries to download the best quality, but sometimes
+you may want to download another format. The simplest case is requesting a
+specific format, for example -f 22. You can get the list of available
+formats using --list-formats, you can also use a file extension
+(currently it supports aac, m4a, mp3, mp4, ogg, wav, webm) or the
+special names best, bestvideo, bestaudio and worst.
+
+If you want to download multiple videos and they don't have the same
+formats available, you can specify the order of preference using
+slashes, as in -f 22/17/18. You can also filter the video results by
+putting a condition in brackets, as in -f "best[height=720]" (or
+-f "[filesize>10M]"). This works for filesize, height, width, tbr, abr,
+vbr, asr, and fps and the comparisons <, <=, >, >=, =, != and for ext,
+acodec, vcodec, container, and protocol and the comparisons =, != .
+Formats for which the value is not known are excluded unless you put a
+question mark (?) after the operator. You can combine format filters, so
+-f "[height <=? 720][tbr>500]" selects up to 720p videos (or videos
+where the height is not known) with a bitrate of at least 500 KBit/s.
+Use commas to download multiple formats, such as
+-f 136/137/mp4/bestvideo,140/m4a/bestaudio. You can merge the video and
+audio of two formats into a single file using
+-f <video-format>+<audio-format> (requires ffmpeg or avconv), for
+example -f bestvideo+bestaudio.
+
+Since the end of April 2015 and version 2015.04.26 youtube-dl uses
+-f bestvideo+bestaudio/best as default format selection (see #5447,
+#5456). If ffmpeg or avconv are installed this results in downloading
+bestvideo and bestaudio separately and muxing them together into a
+single file giving the best overall quality available. Otherwise it
+falls back to best and results in downloading best available quality
+served as a single file. best is also needed for videos that don't come
+from YouTube because they don't provide the audio and video in two
+different files. If you want to only download some dash formats (for
+example if you are not interested in getting videos with a resolution
+higher than 1080p), you can add
+-f bestvideo[height<=?1080]+bestaudio/best to your configuration file.
+Note that if you use youtube-dl to stream to stdout (and most likely to
+pipe it to your media player then), i.e. you explicitly specify output
+template as -o -, youtube-dl still uses -f best format selection in
+order to start content delivery immediately to your player and not to
+wait until bestvideo and bestaudio are downloaded and muxed.
+
+If you want to preserve the old format selection behavior (prior to
+youtube-dl 2015.04.26), i.e. you want to download best available quality
+media served as a single file, you should explicitly specify your choice
+with -f best. You may want to add it to the configuration file in order
+not to type it every time you run youtube-dl.
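+
+A few illustrative invocations combining the points above (BaW_jenozKc is
+the usual youtube-dl test video):
+
+``` {.bash}
+# Explicit format code with fallbacks
+$ youtube-dl -f 22/17/18 'https://www.youtube.com/watch?v=BaW_jenozKc'
+
+# Cap resolution at 1080p, merging best video and audio (needs ffmpeg or avconv)
+$ youtube-dl -f 'bestvideo[height<=?1080]+bestaudio/best' 'https://www.youtube.com/watch?v=BaW_jenozKc'
+
+# Pre-2015.04.26 behaviour: best quality served as a single file
+$ youtube-dl -f best 'https://www.youtube.com/watch?v=BaW_jenozKc'
+```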
+
+
+
+VIDEO SELECTION
+
+
+Videos can be filtered by their upload date using the options --date,
+--datebefore or --dateafter, they accept dates in two formats:
+
+-   Absolute dates: Dates in the format YYYYMMDD.
+-   Relative dates: Dates in the format
+    (now|today)[+-][0-9](day|week|month|year)(s)?
+
+Examples:
+
+``` {.bash}
+# Download only the videos uploaded in the last 6 months
+$ youtube-dl --dateafter now-6months
+
+# Download only the videos uploaded on January 1, 1970
+$ youtube-dl --date 19700101
+
+$ # will only download the videos uploaded in the 200x decade
+$ youtube-dl --dateafter 20000101 --datebefore 20091231
+```
+
+
+
+FAQ
+
+
+How do I update youtube-dl?
+
+If you've followed our manual installation instructions, you can simply
+run youtube-dl -U (or, on Linux, sudo youtube-dl -U).
+
+If you have used pip, a simple sudo pip install -U youtube-dl is
+sufficient to update.
+
+If you have installed youtube-dl using a package manager like _apt-get_
+or _yum_, use the standard system update mechanism to update. Note that
+distribution packages are often outdated. As a rule of thumb, youtube-dl
+releases at least once a month, and often weekly or even daily. Simply
+go to http://yt-dl.org/ to find out the current version. Unfortunately,
+there is nothing we youtube-dl developers can do if your distribution
+serves a really outdated version. You can (and should) complain to your
+distribution in their bugtracker or support forum.
+
+As a last resort, you can also uninstall the version installed by your
+package manager and follow our manual installation instructions. For
+that, remove the distribution's package, with a line like
+
+    sudo apt-get remove -y youtube-dl
+
+Afterwards, simply follow our manual installation instructions:
+
+    sudo wget https://yt-dl.org/latest/youtube-dl -O /usr/local/bin/youtube-dl
+    sudo chmod a+x /usr/local/bin/youtube-dl
+    hash -r
+
+Again, from then on you'll be able to update with sudo youtube-dl -U.
+
+I'm getting an error Unable to extract OpenGraph title on YouTube playlists
+
+YouTube changed their playlist format in March 2014 and later on, so
+you'll need at least youtube-dl 2014.07.25 to download all YouTube
+videos.
+
+If you have installed youtube-dl with a package manager, pip, setup.py
+or a tarball, please use that to update. Note that Ubuntu packages do
+not seem to get updated anymore. Since we are not affiliated with
+Ubuntu, there is little we can do. Feel free to report bugs to the
+Ubuntu packaging guys - all they have to do is update the package to a
+somewhat recent version. See above for a way to update.
+
+Do I always have to pass -citw?
+
+By default, youtube-dl intends to have the best options (incidentally,
+if you have a convincing case that these should be different, please
+file an issue where you explain that). Therefore, it is unnecessary and
+sometimes harmful to copy long option strings from webpages. In
+particular, the only option out of -citw that is regularly useful is -i.
+
+Can you please put the -b option back?
+
+Most people asking this question are not aware that youtube-dl now
+defaults to downloading the highest available quality as reported by
+YouTube, which will be 1080p or 720p in some cases, so you no longer
+need the -b option. For some specific videos, maybe YouTube does not
+report them to be available in a specific high quality format you're
+interested in. In that case, simply request it with the -f option and
+youtube-dl will try to download it.
+
+I get HTTP error 402 when trying to download a video. What's this?
+
+Apparently YouTube requires you to pass a CAPTCHA test if you download
+too much. We're considering providing a way to let you solve the
+CAPTCHA, but at the moment, your best course of action is pointing a
+web browser to the YouTube URL, solving the CAPTCHA, and restarting
+youtube-dl.
+
+I have downloaded a video but how can I play it?
+
+Once the video is fully downloaded, use any video player, such as vlc or
+mplayer.
+
+I extracted a video URL with -g, but it does not play on another machine / in my webbrowser.
+
+It depends a lot on the service. In many cases, requests for the video
+(to download/play it) must come from the same IP address and with the
+same cookies. Use the --cookies option to write the required cookies
+into a file, and advise your downloader to read cookies from that file.
+Some sites also require a common user agent to be used, use
+--dump-user-agent to see the one in use by youtube-dl.
+
+It may be beneficial to use IPv6; in some cases, the restrictions are
+only applied to IPv4. Some services (sometimes only for a subset of
+videos) do not restrict the video URL by IP address, cookie, or
+user-agent, but these are the exception rather than the rule.
+
+Please bear in mind that some URL protocols are NOT supported by
+browsers out of the box, including RTMP. If you are using -g, your own
+downloader must support these as well.
+
+If you want to play the video on a machine that is not running
+youtube-dl, you can relay the video content from the machine that runs
+youtube-dl. You can use -o - to let youtube-dl stream a video to stdout,
+or simply allow the player to download the files written by youtube-dl
+in turn.
+
+ERROR: no fmt_url_map or conn information found in video info
+
+YouTube has switched to a new video info format in July 2011 which is
+not supported by old versions of youtube-dl. See above for how to update
+youtube-dl.
+
+ERROR: unable to download video
+
+YouTube requires an additional signature since September 2012 which is
+not supported by old versions of youtube-dl. See above for how to update
+youtube-dl.
+
+Video URL contains an ampersand and I'm getting some strange output [1] 2839 or 'v' is not recognized as an internal or external command
+
+That's actually the output from your shell. Since ampersand is one of
+the special shell characters it's interpreted by the shell, preventing you
+from passing the whole URL to youtube-dl. To disable your shell from
+interpreting the ampersands (or any other special characters) you have
+to either put the whole URL in quotes or escape them with a backslash
+(which approach will work depends on your shell).
+
+For example if your URL is
+https://www.youtube.com/watch?t=4&v=BaW_jenozKc you should end up with
+the following command:
+
+youtube-dl 'https://www.youtube.com/watch?t=4&v=BaW_jenozKc'
+
+or
+
+youtube-dl https://www.youtube.com/watch?t=4\&v=BaW_jenozKc
+
+For Windows you have to use the double quotes:
+
+youtube-dl "https://www.youtube.com/watch?t=4&v=BaW_jenozKc";
+
+ExtractorError: Could not find JS function u'OF'
+
+In February 2015, the new YouTube player contained a character sequence
+in a string that was misinterpreted by old versions of youtube-dl. See
+above for how to update youtube-dl.
+
+HTTP Error 429: Too Many Requests or 402: Payment Required
+
+These two error codes indicate that the service is blocking your IP
+address because of overuse. Contact the service and ask them to unblock
+your IP address, or - if you have acquired a whitelisted IP address
+already - use the --proxy or --source-address options to select another
+IP address.
+
+SyntaxError: Non-ASCII character
+
+The error
+
+    File "youtube-dl", line 2
+    SyntaxError: Non-ASCII character '\x93' ...
+
+means you're using an outdated version of Python. Please update to
+Python 2.6 or 2.7.
+
+What is this binary file? Where has the code gone?
+
+Since June 2012 (#342) youtube-dl is packed as an executable zipfile,
+simply unzip it (might need renaming to youtube-dl.zip first on some
+systems) or clone the git repository, as laid out above. If you modify
+the code, you can run it by executing the __main__.py file. To recompile
+the executable, run make youtube-dl.
+
+The exe throws a _Runtime error from Visual C++_
+
+To run the exe you first need to install the Microsoft Visual C++ 2008
+Redistributable Package.
+
+On Windows, how should I set up ffmpeg and youtube-dl? Where should I put the exe files?
+
+If you put youtube-dl and ffmpeg in the same directory that you're
+running the command from, it will work, but that's rather cumbersome.
+
+To make a different directory work - either for ffmpeg, or for
+youtube-dl, or for both - simply create the directory (say, C:\bin, or
+C:\Users\<User name>\bin), put all the executables directly in there,
+and then set your PATH environment variable to include that directory.
+
+From then on, after restarting your shell, you will be able to access
+both youtube-dl and ffmpeg (and youtube-dl will be able to find ffmpeg)
+by simply typing youtube-dl or ffmpeg, no matter what directory you're
+in.
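+
+For example, to append C:\bin to the user PATH from a command prompt
+(the directory name is illustrative; open a new shell afterwards so the
+change takes effect):
+
+    setx PATH "%PATH%;C:\bin"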
+
+How do I put downloads into a specific folder?
+
+Use the -o option to specify an output template, for example
+-o "/home/user/videos/%(title)s-%(id)s.%(ext)s". If you want this for
+all of your downloads, put the option into your configuration file.
+
+How do I download a video starting with a - ?
+
+Either prepend http://www.youtube.com/watch?v= or separate the ID from
+the options with --:
+
+    youtube-dl -- -wNyEUrxzFU
+    youtube-dl "http://www.youtube.com/watch?v=-wNyEUrxzFU";
+
+Can you add support for this anime video site, or site which shows current movies for free?
+
+As a matter of policy (as well as legality), youtube-dl does not include
+support for services that specialize in infringing copyright. As a rule
+of thumb, if you cannot easily find a video that the service is quite
+obviously allowed to distribute (i.e. that has been uploaded by the
+creator, the creator's distributor, or is published under a free
+license), the service is probably unfit for inclusion in youtube-dl.
+
+A note on the service that it doesn't host the infringing content, but
+just links to those who do, is evidence that the service should NOT be
+included in youtube-dl. The same goes for any DMCA note when the whole
+front page of the service is filled with videos they are not allowed to
+distribute. A "fair use" note is equally unconvincing if the service
+shows copyright-protected videos in full without authorization.
+
+Support requests for services that DO purchase the rights to distribute
+their content are perfectly fine though. If in doubt, you can simply
+include a source that mentions the legitimate purchase of content.
+
+How can I speed up work on my issue?
+
+(Also known as: Help, my important issue is not being solved!) The
+youtube-dl core developer team is quite small. While we do our best to
+solve as many issues as possible, sometimes that can take quite a while.
+To speed up your issue, here's what you can do:
+
+First of all, please do report the issue at our issue tracker. That
+allows us to coordinate all efforts by users and developers, and serves
+as a unified point. Unfortunately, the youtube-dl project has grown too
+large to use personal email as an effective communication channel.
+
+Please read the bug reporting instructions below. A lot of bugs lack all
+the necessary information. If you can, offer proxy, VPN, or shell access
+to the youtube-dl developers. If you are able to, test the issue from
+multiple computers in multiple countries to exclude local censorship or
+misconfiguration issues.
+
+If nobody is interested in solving your issue, you are welcome to take
+matters into your own hands and submit a pull request (or coerce/pay
+somebody else to do so).
+
+Feel free to bump the issue from time to time by writing a small comment
+("Issue is still present in youtube-dl version ...from France, but fixed
+from Belgium"), but please not more than once a month. Please do not
+declare your issue as important or urgent.
+
+How can I detect whether a given URL is supported by youtube-dl?
+
+For one, have a look at the list of supported sites. Note that it can
+sometimes happen that the site changes its URL scheme (say, from
+http://example.com/video/1234567 to http://example.com/v/1234567 ) and
+youtube-dl reports a URL of a service in that list as unsupported. In
+that case, simply report a bug.
+
+It is _not_ possible to detect whether a URL is supported or not. That's
+because youtube-dl contains a generic extractor which matches ALL URLs.
+You may be tempted to disable, exclude, or remove the generic extractor,
+but the generic extractor not only allows users to extract videos from
+lots of websites that embed a video from another service, but may also
+be used to extract video from a service that it's hosting itself.
+Therefore, we neither recommend nor support disabling, excluding, or
+removing the generic extractor.
+
+If you want to find out whether a given URL is supported, simply call
+youtube-dl with it. If you get no videos back, chances are the URL is
+either not referring to a video or unsupported. You can find out which
+by examining the output (if you run youtube-dl on the console) or
+catching an UnsupportedError exception if you run it from a Python
+program.
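+
+A minimal sketch of that programmatic check (whether UnsupportedError
+surfaces directly or wrapped in a DownloadError can vary by version, so
+both are caught here):
+
+``` {.python}
+from __future__ import unicode_literals
+import youtube_dl
+from youtube_dl.utils import DownloadError, UnsupportedError
+
+
+def is_supported(url):
+    ydl = youtube_dl.YoutubeDL({'quiet': True, 'simulate': True})
+    try:
+        # Only probe the metadata; nothing is downloaded.
+        info = ydl.extract_info(url, download=False)
+    except (UnsupportedError, DownloadError):
+        return False
+    return info is not None
+```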
+
+
+
+DEVELOPER INSTRUCTIONS
+
+
+Most users do not need to build youtube-dl and can download the builds
+or get them from their distribution.
+
+To run youtube-dl as a developer, you don't need to build anything
+either. Simply execute
+
+    python -m youtube_dl
+
+To run the test, simply invoke your favorite test runner, or execute a
+test file directly; any of the following work:
+
+    python -m unittest discover
+    python test/test_download.py
+    nosetests
+
+If you want to create a build of youtube-dl yourself, you'll need
+
+-   python
+-   make
+-   pandoc
+-   zip
+-   nosetests
+
+Adding support for a new site
+
+If you want to add support for a new site, you can follow this quick
+list (assuming your service is called yourextractor):
+
+1.  Fork this repository
+2.  Check out the source code with
+    git clone git@github.com:YOUR_GITHUB_USERNAME/youtube-dl.git
+3.  Start a new git branch with
+    cd youtube-dl; git checkout -b yourextractor
+4.  Start with this simple template and save it to
+    youtube_dl/extractor/yourextractor.py:
+
+    ``` {.python}
+    # coding: utf-8
+    from __future__ import unicode_literals
+
+    from .common import InfoExtractor
+
+
+    class YourExtractorIE(InfoExtractor):
+        _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
+        _TEST = {
+            'url': 'http://yourextractor.com/watch/42',
+            'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
+            'info_dict': {
+                'id': '42',
+                'ext': 'mp4',
+                'title': 'Video title goes here',
+                'thumbnail': 're:^https?://.*\.jpg$',
+                # TODO more properties, either as:
+                # * A value
+                # * MD5 checksum; start the string with md5:
+                # * A regular expression; start the string with re:
+                # * Any Python type (for example int or float)
+            }
+        }
+
+        def _real_extract(self, url):
+            video_id = self._match_id(url)
+            webpage = self._download_webpage(url, video_id)
+
+            # TODO more code goes here, for example ...
+            title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title')
+
+            return {
+                'id': video_id,
+                'title': title,
+                'description': self._og_search_description(webpage),
+                # TODO more properties (see youtube_dl/extractor/common.py)
+            }
+    ```
+
+5.  Add an import in youtube_dl/extractor/__init__.py.
+6.  Run python test/test_download.py TestDownload.test_YourExtractor.
+    This _should fail_ at first, but you can continually re-run it until
+    you're done. If you decide to add more than one test, then rename
+    _TEST to _TESTS and make it into a list of dictionaries. The tests
+    will then be named TestDownload.test_YourExtractor,
+    TestDownload.test_YourExtractor_1,
+    TestDownload.test_YourExtractor_2, etc.
+7.  Have a look at youtube_dl/extractor/common.py for possible
+    helper methods and a detailed description of what your extractor
+    should return. Add tests and code for as many as you want.
+8.  If you can, check the code with flake8.
+9.  When the tests pass, add the new files and commit them and push the
+    result, like this:
+
+        $ git add youtube_dl/extractor/__init__.py
+        $ git add youtube_dl/extractor/yourextractor.py
+        $ git commit -m '[yourextractor] Add new extractor'
+        $ git push origin yourextractor
+
+10. Finally, create a pull request. We'll then review and merge it.
+
+In any case, thank you very much for your contributions!
+
+
+
+EMBEDDING YOUTUBE-DL
+
+
+youtube-dl makes the best effort to be a good command-line program, and
+thus should be callable from any programming language. If you encounter
+any problems parsing its output, feel free to create a report.
+
+From a Python program, you can embed youtube-dl in a more powerful
+fashion, like this:
+
+``` {.python}
+from __future__ import unicode_literals
+import youtube_dl
+
+ydl_opts = {}
+with youtube_dl.YoutubeDL(ydl_opts) as ydl:
+    ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])
+```
+
+Most likely, you'll want to use various options. For a list of what can
+be done, have a look at youtube_dl/YoutubeDL.py. For a start, if you
+want to intercept youtube-dl's output, set a logger object.
+
+Here's a more complete example of a program that outputs only errors
+(and a short message after the download is finished), and
+downloads/converts the video to an mp3 file:
+
+``` {.python}
+from __future__ import unicode_literals
+import youtube_dl
+
+
+class MyLogger(object):
+    def debug(self, msg):
+        pass
+
+    def warning(self, msg):
+        pass
+
+    def error(self, msg):
+        print(msg)
+
+
+def my_hook(d):
+    if d['status'] == 'finished':
+        print('Done downloading, now converting ...')
+
+
+ydl_opts = {
+    'format': 'bestaudio/best',
+    'postprocessors': [{
+        'key': 'FFmpegExtractAudio',
+        'preferredcodec': 'mp3',
+        'preferredquality': '192',
+    }],
+    'logger': MyLogger(),
+    'progress_hooks': [my_hook],
+}
+with youtube_dl.YoutubeDL(ydl_opts) as ydl:
+    ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])
+```
+
+
+
+BUGS
+
+
+Bugs and suggestions should be reported at:
+https://github.com/rg3/youtube-dl/issues . Unless you were prompted to
+do so or there is another pertinent reason (e.g. GitHub fails to accept the
+bug report), please do not send bug reports via personal email. For
+discussions, join us in the irc channel #youtube-dl on freenode.
+
+PLEASE INCLUDE THE FULL OUTPUT OF YOUTUBE-DL WHEN RUN WITH -v.
+
+The output (including the first lines) contains important debugging
+information. Issues without the full output are often not reproducible
+and therefore do not get solved in short order, if ever.
+
+Please re-read your issue once again to avoid a couple of common
+mistakes (you can and should use this as a checklist):
+
+Is the description of the issue itself sufficient?
+
+We often get issue reports that we cannot really decipher. While in most
+cases we eventually get the required information after asking back
+multiple times, this poses an unnecessary drain on our resources. Many
+contributors, including myself, are also not native speakers, so we may
+misread some parts.
+
+So please elaborate on what feature you are requesting, or what bug you
+want to be fixed. Make sure that it's obvious
+
+-   What the problem is
+-   How it could be fixed
+-   What your proposed solution would look like
+
+If your report is shorter than two lines, it is almost certainly missing
+some of these, which makes it hard for us to respond to it. We're often
+too polite to close the issue outright, but the missing info makes
+misinterpretation likely. As a committer myself, I often get frustrated
+by these issues, since the only possible way for me to move forward on
+them is to ask for clarification over and over.
+
+For bug reports, this means that your report should contain the
+_complete_ output of youtube-dl when called with the -v flag. The error
+message you get for (most) bugs even says so, but you would not believe
+how many of our bug reports do not contain this information.
+
+If your server has multiple IPs or you suspect censorship,
+adding --call-home may be a good idea to get more diagnostics. If the
+error is ERROR: Unable to extract ... and you cannot reproduce it from
+multiple countries, add --dump-pages (warning: this will yield a rather
+large output, redirect it to the file log.txt by adding >log.txt 2>&1 to
+your command-line) or upload the .dump files you get when you add
+--write-pages somewhere.
+
+SITE SUPPORT REQUESTS MUST CONTAIN AN EXAMPLE URL. An example URL is a
+URL you might want to download, like
+http://www.youtube.com/watch?v=BaW_jenozKc . There should be an obvious
+video present. Except under very special circumstances, the main page of
+a video service (e.g. http://www.youtube.com/ ) is _not_ an example URL.
+
+Are you using the latest version?
+
+Before reporting any issue, type youtube-dl -U. This should report that
+you're up-to-date. About 20% of the reports we receive are already
+fixed, but people are using outdated versions. This goes for feature
+requests as well.
+
+Is the issue already documented?
+
+Make sure that someone has not already opened the issue you're trying to
+open. Search at the top of the window or at
+https://github.com/rg3/youtube-dl/search?type=Issues . If there is an
+issue, feel free to write something along the lines of "This affects me
+as well, with version 2015.01.01. Here is some more information on the
+issue: ...". While some issues may be old, a new post into them often
+spurs rapid activity.
+
+Why are existing options not enough?
+
+Before requesting a new feature, please have a quick peek at the list of
+supported options. Many feature requests are for features that actually
+exist already! Please, absolutely do show off your work in the issue
+report and detail how the existing similar options do _not_ solve your
+problem.
+
+Is there enough context in your bug report?
+
+People want to solve problems, and often think they do us a favor by
+breaking down their larger problems (e.g. wanting to skip already
+downloaded files) to a specific request (e.g. requesting us to look
+whether the file exists before downloading the info page). However, what
+often happens is that they break down the problem into two steps: one
+simple, and one impossible (or extremely complicated).
+
+We are then presented with a very complicated request when the original
+problem could be solved far more easily, e.g. by recording the downloaded
+video IDs in a separate file. To avoid this, you must include the
+greater context where it is non-obvious. In particular, every feature
+request that does not consist of adding support for a new site should
+contain a use case scenario that explains in what situation the missing
+feature would be useful.
+
+Does the issue involve one problem, and one problem only?
+
+Some of our users seem to think there is a limit of issues they can or
+should open. There is no limit of issues they can or should open. While
+it may seem appealing to be able to dump all your issues into one
+ticket, that means that someone who solves one of your issues cannot
+mark the issue as closed. Typically, reporting a bunch of issues leads
+to the ticket lingering since nobody wants to attack that behemoth,
+until someone mercifully splits the issue into multiple ones.
+
+In particular, every site support request issue should only pertain to
+services at one site (generally under a common domain, but always using
+the same backend technology). Do not request support for vimeo user
+videos, Whitehouse podcasts, and Google Plus pages in the same issue.
+Also, make sure that you don't post bug reports alongside feature
+requests. As a rule of thumb, a feature request does not include outputs
+of youtube-dl that are not immediately related to the feature at hand.
+Do not post reports of a network error alongside the request for a new
+video service.
+
+Is anyone going to need the feature?
+
+Only post features that you (or an incapacitated friend you can
+personally talk to) require. Do not post features because they seem like
+a good idea. If they are really useful, they will be requested by
+someone who requires them.
+
+Is your question about youtube-dl?
+
+It may sound strange, but some bug reports we receive are completely
+unrelated to youtube-dl and relate to a different or even the reporter's
+own application. Please make sure that you are actually using
+youtube-dl. If you are using a UI for youtube-dl, report the bug to the
+maintainer of the actual application providing the UI. On the other
+hand, if your UI for youtube-dl fails in some way you believe is related
+to youtube-dl, by all means, go ahead and report the bug.
+
+
+
+COPYRIGHT
+
+
+youtube-dl is released into the public domain by the copyright holders.
+
+This README file was originally written by Daniel Bolton
+(https://github.com/dbbolton) and is likewise released into the public
+domain.

=== added directory 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl'
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/YoutubeDL.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/YoutubeDL.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/YoutubeDL.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,1889 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from __future__ import absolute_import, unicode_literals
+
+import collections
+import contextlib
+import datetime
+import errno
+import fileinput
+import io
+import itertools
+import json
+import locale
+import operator
+import os
+import platform
+import re
+import shutil
+import subprocess
+import socket
+import sys
+import time
+import traceback
+
+if os.name == 'nt':
+    import ctypes
+
+from .compat import (
+    compat_basestring,
+    compat_cookiejar,
+    compat_expanduser,
+    compat_get_terminal_size,
+    compat_http_client,
+    compat_kwargs,
+    compat_str,
+    compat_urllib_error,
+    compat_urllib_request,
+)
+from .utils import (
+    escape_url,
+    ContentTooShortError,
+    date_from_str,
+    DateRange,
+    DEFAULT_OUTTMPL,
+    determine_ext,
+    DownloadError,
+    encodeFilename,
+    ExtractorError,
+    format_bytes,
+    formatSeconds,
+    HEADRequest,
+    locked_file,
+    make_HTTPS_handler,
+    MaxDownloadsReached,
+    PagedList,
+    parse_filesize,
+    PerRequestProxyHandler,
+    PostProcessingError,
+    platform_name,
+    preferredencoding,
+    render_table,
+    SameFileError,
+    sanitize_filename,
+    sanitize_path,
+    std_headers,
+    subtitles_filename,
+    UnavailableVideoError,
+    url_basename,
+    version_tuple,
+    write_json_file,
+    write_string,
+    YoutubeDLHandler,
+    prepend_extension,
+    replace_extension,
+    args_to_str,
+    age_restricted,
+)
+from .cache import Cache
+from .extractor import get_info_extractor, gen_extractors
+from .downloader import get_suitable_downloader
+from .downloader.rtmp import rtmpdump_version
+from .postprocessor import (
+    FFmpegFixupM4aPP,
+    FFmpegFixupStretchedPP,
+    FFmpegMergerPP,
+    FFmpegPostProcessor,
+    get_postprocessor,
+)
+from .version import __version__
+
+
+class YoutubeDL(object):
+    """YoutubeDL class.
+
+    YoutubeDL objects are the ones responsible for downloading the
+    actual video file and writing it to disk if the user has requested
+    it, among some other tasks. In most cases there should be one per
+    program. Given a video URL, the downloader doesn't know how to
+    extract all the needed information (a task that InfoExtractors do),
+    so it has to pass the URL to one of them.
+
+    For this, YoutubeDL objects have a method that allows
+    InfoExtractors to be registered in a given order. When it is passed
+    a URL, the YoutubeDL object hands it to the first InfoExtractor it
+    finds that reports being able to handle it. The InfoExtractor extracts
+    all the information about the video or videos the URL refers to, and
+    YoutubeDL processes the extracted information, possibly using a File
+    Downloader to download the video.
+
+    YoutubeDL objects accept a lot of parameters. In order not to saturate
+    the object constructor with arguments, it receives a dictionary of
+    options instead. These options are available through the params
+    attribute for the InfoExtractors to use. The YoutubeDL also
+    registers itself as the downloader in charge for the InfoExtractors
+    that are added to it, so this is a "mutual registration".
+
+    Available options:
+
+    username:          Username for authentication purposes.
+    password:          Password for authentication purposes.
+    videopassword:     Password for accessing a video.
+    usenetrc:          Use netrc for authentication instead.
+    verbose:           Print additional info to stdout.
+    quiet:             Do not print messages to stdout.
+    no_warnings:       Do not print out anything for warnings.
+    forceurl:          Force printing final URL.
+    forcetitle:        Force printing title.
+    forceid:           Force printing ID.
+    forcethumbnail:    Force printing thumbnail URL.
+    forcedescription:  Force printing description.
+    forcefilename:     Force printing final filename.
+    forceduration:     Force printing duration.
+    forcejson:         Force printing info_dict as JSON.
+    dump_single_json:  Force printing the info_dict of the whole playlist
+                       (or video) as a single JSON line.
+    simulate:          Do not download the video files.
+    format:            Video format code. See options.py for more information.
+    outtmpl:           Template for output names.
+    restrictfilenames: Do not allow "&" and spaces in file names
+    ignoreerrors:      Do not stop on download errors.
+    force_generic_extractor: Force downloader to use the generic extractor
+    nooverwrites:      Prevent overwriting files.
+    playliststart:     Playlist item to start at.
+    playlistend:       Playlist item to end at.
+    playlist_items:    Specific indices of playlist to download.
+    playlistreverse:   Download playlist items in reverse order.
+    matchtitle:        Download only matching titles.
+    rejecttitle:       Reject downloads for matching titles.
+    logger:            Log messages to a logging.Logger instance.
+    logtostderr:       Log messages to stderr instead of stdout.
+    writedescription:  Write the video description to a .description file
+    writeinfojson:     Write the video description to a .info.json file
+    writeannotations:  Write the video annotations to a .annotations.xml file
+    writethumbnail:    Write the thumbnail image to a file
+    write_all_thumbnails:  Write all thumbnail formats to files
+    writesubtitles:    Write the video subtitles to a file
+    writeautomaticsub: Write the automatic subtitles to a file
+    allsubtitles:      Downloads all the subtitles of the video
+                       (requires writesubtitles or writeautomaticsub)
+    listsubtitles:     Lists all available subtitles for the video
+    subtitlesformat:   The format code for subtitles
+    subtitleslangs:    List of languages of the subtitles to download
+    keepvideo:         Keep the video file after post-processing
+    daterange:         A DateRange object, download only if the upload_date is in the range.
+    skip_download:     Skip the actual download of the video file
+    cachedir:          Location of the cache files in the filesystem.
+                       False to disable filesystem cache.
+    noplaylist:        Download single video instead of a playlist if in doubt.
+    age_limit:         An integer representing the user's age in years.
+                       Unsuitable videos for the given age are skipped.
+    min_views:         An integer representing the minimum view count the video
+                       must have in order to not be skipped.
+                       Videos without view count information are always
+                       downloaded. None for no limit.
+    max_views:         An integer representing the maximum view count.
+                       Videos that are more popular than that are not
+                       downloaded.
+                       Videos without view count information are always
+                       downloaded. None for no limit.
+    download_archive:  File name of a file where all downloads are recorded.
+                       Videos already present in the file are not downloaded
+                       again.
+    cookiefile:        File name where cookies should be read from and dumped to.
+    nocheckcertificate:Do not verify SSL certificates
+    prefer_insecure:   Use HTTP instead of HTTPS to retrieve information.
+                       At the moment, this is only supported by YouTube.
+    proxy:             URL of the proxy server to use
+    cn_verification_proxy:  URL of the proxy to use for IP address verification
+                       on Chinese sites. (Experimental)
+    socket_timeout:    Time to wait for unresponsive hosts, in seconds
+    bidi_workaround:   Work around buggy terminals without bidirectional text
+                       support, using fribidi
+    debug_printtraffic:Print out sent and received HTTP traffic
+    include_ads:       Download ads as well
+    default_search:    Prepend this string if an input URL is not valid.
+                       'auto' for elaborate guessing
+    encoding:          Use this encoding instead of the system-specified.
+    extract_flat:      Do not resolve URLs, return the immediate result.
+                       Pass in 'in_playlist' to only show this behavior for
+                       playlist items.
+    postprocessors:    A list of dictionaries, each with an entry
+                       * key:  The name of the postprocessor. See
+                               youtube_dl/postprocessor/__init__.py for a list.
+                       as well as any further keyword arguments for the
+                       postprocessor.
+    progress_hooks:    A list of functions that get called on download
+                       progress, with a dictionary with the entries
+                       * status: One of "downloading", "error", or "finished".
+                                 Check this first and ignore unknown values.
+
+                       If status is one of "downloading", or "finished", the
+                       following properties may also be present:
+                       * filename: The final filename (always present)
+                       * tmpfilename: The filename we're currently writing to
+                       * downloaded_bytes: Bytes on disk
+                       * total_bytes: Size of the whole file, None if unknown
+                       * total_bytes_estimate: Guess of the eventual file size,
+                                               None if unavailable.
+                       * elapsed: The number of seconds since download started.
+                       * eta: The estimated time in seconds, None if unknown
+                       * speed: The download speed in bytes/second, None if
+                                unknown
+                       * fragment_index: The counter of the currently
+                                         downloaded video fragment.
+                       * fragment_count: The number of fragments (= individual
+                                         files that will be merged)
+
+                       Progress hooks are guaranteed to be called at least once
+                       (with status "finished") if the download is successful.
+    merge_output_format: Extension to use when merging formats.
+    fixup:             Automatically correct known faults of the file.
+                       One of:
+                       - "never": do nothing
+                       - "warn": only emit a warning
+                       - "detect_or_warn": check whether we can do anything
+                                           about it, warn otherwise (default)
+    source_address:    (Experimental) Client-side IP address to bind to.
+    call_home:         Boolean, true iff we are allowed to contact the
+                       youtube-dl servers for debugging.
+    sleep_interval:    Number of seconds to sleep before each download.
+    listformats:       Print an overview of available video formats and exit.
+    list_thumbnails:   Print a table of all thumbnails and exit.
+    match_filter:      A function that gets called with the info_dict of
+                       every video.
+                       If it returns a message, the video is ignored.
+                       If it returns None, the video is downloaded.
+                       match_filter_func in utils.py is one example for this.
+    no_color:          Do not emit color codes in output.
+
+    The following options determine which downloader is picked:
+    external_downloader: Executable of the external downloader to call.
+                       None or unset for standard (built-in) downloader.
+    hls_prefer_native: Use the native HLS downloader instead of ffmpeg/avconv.
+
+    The following parameters are not used by YoutubeDL itself; they are used by
+    the downloader (see youtube_dl/downloader/common.py):
+    nopart, updatetime, buffersize, ratelimit, min_filesize, max_filesize, test,
+    noresizebuffer, retries, continuedl, noprogress, consoletitle,
+    xattr_set_filesize, external_downloader_args.
+
+    The following options are used by the post processors:
+    prefer_ffmpeg:     If True, use ffmpeg instead of avconv if both are available,
+                       otherwise prefer avconv.
+    postprocessor_args: A list of additional command-line arguments for the
+                        postprocessor.
+    """
+
+    params = None
+    _ies = []
+    _pps = []
+    _download_retcode = None
+    _num_downloads = None
+    _screen_file = None
+
+    def __init__(self, params=None, auto_init=True):
+        """Create a FileDownloader object with the given options."""
+        if params is None:
+            params = {}
+        self._ies = []
+        self._ies_instances = {}
+        self._pps = []
+        self._progress_hooks = []
+        self._download_retcode = 0
+        self._num_downloads = 0
+        self._screen_file = [sys.stdout, sys.stderr][params.get('logtostderr', False)]
+        self._err_file = sys.stderr
+        self.params = params
+        self.cache = Cache(self)
+
+        if params.get('bidi_workaround', False):
+            try:
+                import pty
+                master, slave = pty.openpty()
+                width = compat_get_terminal_size().columns
+                if width is None:
+                    width_args = []
+                else:
+                    width_args = ['-w', str(width)]
+                sp_kwargs = dict(
+                    stdin=subprocess.PIPE,
+                    stdout=slave,
+                    stderr=self._err_file)
+                try:
+                    self._output_process = subprocess.Popen(
+                        ['bidiv'] + width_args, **sp_kwargs
+                    )
+                except OSError:
+                    self._output_process = subprocess.Popen(
+                        ['fribidi', '-c', 'UTF-8'] + width_args, **sp_kwargs)
+                self._output_channel = os.fdopen(master, 'rb')
+            except OSError as ose:
+                if ose.errno == errno.ENOENT:
+                    self.report_warning('Could not find fribidi executable, ignoring --bidi-workaround. Make sure that  fribidi  is an executable file in one of the directories in your $PATH.')
+                else:
+                    raise
+
+        if (sys.version_info >= (3,) and sys.platform != 'win32' and
+                sys.getfilesystemencoding() in ['ascii', 'ANSI_X3.4-1968'] and
+                not params.get('restrictfilenames', False)):
+            # On Python 3, the Unicode filesystem API will throw errors (#1474)
+            self.report_warning(
+                'Assuming --restrict-filenames since file system encoding '
+                'cannot encode all characters. '
+                'Set the LC_ALL environment variable to fix this.')
+            self.params['restrictfilenames'] = True
+
+        if isinstance(params.get('outtmpl'), bytes):
+            self.report_warning(
+                'Parameter outtmpl is bytes, but should be a unicode string. '
+                'Put  from __future__ import unicode_literals  at the top of your code file or consider switching to Python 3.x.')
+
+        self._setup_opener()
+
+        if auto_init:
+            self.print_debug_header()
+            self.add_default_info_extractors()
+
+        for pp_def_raw in self.params.get('postprocessors', []):
+            pp_class = get_postprocessor(pp_def_raw['key'])
+            pp_def = dict(pp_def_raw)
+            del pp_def['key']
+            pp = pp_class(self, **compat_kwargs(pp_def))
+            self.add_post_processor(pp)
+
+        for ph in self.params.get('progress_hooks', []):
+            self.add_progress_hook(ph)
+
+    def warn_if_short_id(self, argv):
+        # short YouTube ID starting with dash?
+        idxs = [
+            i for i, a in enumerate(argv)
+            if re.match(r'^-[0-9A-Za-z_-]{10}$', a)]
+        if idxs:
+            correct_argv = (
+                ['youtube-dl'] +
+                [a for i, a in enumerate(argv) if i not in idxs] +
+                ['--'] + [argv[i] for i in idxs]
+            )
+            self.report_warning(
+                'Long argument string detected. '
+                'Use -- to separate parameters and URLs, like this:\n%s\n' %
+                args_to_str(correct_argv))
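+        # Illustrative: an ID such as '-abcdefghij' (made up) would otherwise
+        # be parsed as an option, so the suggested invocation separates it
+        # with '--':
+        #
+        #     youtube-dl -- -abcdefghij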
+
+    def add_info_extractor(self, ie):
+        """Add an InfoExtractor object to the end of the list."""
+        self._ies.append(ie)
+        self._ies_instances[ie.ie_key()] = ie
+        ie.set_downloader(self)
+
+    def get_info_extractor(self, ie_key):
+        """
+        Get an instance of an IE with name ie_key; it will try to get one
+        from the _ies list, and if there is no instance it will create a
+        new one and add it to the extractor list.
+        """
+        ie = self._ies_instances.get(ie_key)
+        if ie is None:
+            ie = get_info_extractor(ie_key)()
+            self.add_info_extractor(ie)
+        return ie
+
+    def add_default_info_extractors(self):
+        """
+        Add the InfoExtractors returned by gen_extractors to the end of the list
+        """
+        for ie in gen_extractors():
+            self.add_info_extractor(ie)
+
+    def add_post_processor(self, pp):
+        """Add a PostProcessor object to the end of the chain."""
+        self._pps.append(pp)
+        pp.set_downloader(self)
+
+    def add_progress_hook(self, ph):
+        """Add the progress hook (currently only for the file downloader)"""
+        self._progress_hooks.append(ph)
+
+    def _bidi_workaround(self, message):
+        if not hasattr(self, '_output_channel'):
+            return message
+
+        assert hasattr(self, '_output_process')
+        assert isinstance(message, compat_str)
+        line_count = message.count('\n') + 1
+        self._output_process.stdin.write((message + '\n').encode('utf-8'))
+        self._output_process.stdin.flush()
+        res = ''.join(self._output_channel.readline().decode('utf-8')
+                      for _ in range(line_count))
+        return res[:-len('\n')]
+
+    def to_screen(self, message, skip_eol=False):
+        """Print message to stdout if not in quiet mode."""
+        return self.to_stdout(message, skip_eol, check_quiet=True)
+
+    def _write_string(self, s, out=None):
+        write_string(s, out=out, encoding=self.params.get('encoding'))
+
+    def to_stdout(self, message, skip_eol=False, check_quiet=False):
+        """Print message to stdout if not in quiet mode."""
+        if self.params.get('logger'):
+            self.params['logger'].debug(message)
+        elif not check_quiet or not self.params.get('quiet', False):
+            message = self._bidi_workaround(message)
+            terminator = ['\n', ''][skip_eol]
+            output = message + terminator
+
+            self._write_string(output, self._screen_file)
+
+    def to_stderr(self, message):
+        """Print message to stderr."""
+        assert isinstance(message, compat_str)
+        if self.params.get('logger'):
+            self.params['logger'].error(message)
+        else:
+            message = self._bidi_workaround(message)
+            output = message + '\n'
+            self._write_string(output, self._err_file)
+
+    def to_console_title(self, message):
+        if not self.params.get('consoletitle', False):
+            return
+        if os.name == 'nt' and ctypes.windll.kernel32.GetConsoleWindow():
+            # c_wchar_p() might not be necessary if `message` is
+            # already of type unicode()
+            ctypes.windll.kernel32.SetConsoleTitleW(ctypes.c_wchar_p(message))
+        elif 'TERM' in os.environ:
+            self._write_string('\033]0;%s\007' % message, self._screen_file)
+
+    def save_console_title(self):
+        if not self.params.get('consoletitle', False):
+            return
+        if 'TERM' in os.environ:
+            # Save the title on stack
+            self._write_string('\033[22;0t', self._screen_file)
+
+    def restore_console_title(self):
+        if not self.params.get('consoletitle', False):
+            return
+        if 'TERM' in os.environ:
+            # Restore the title from stack
+            self._write_string('\033[23;0t', self._screen_file)
+
+    def __enter__(self):
+        self.save_console_title()
+        return self
+
+    def __exit__(self, *args):
+        self.restore_console_title()
+
+        if self.params.get('cookiefile') is not None:
+            self.cookiejar.save()
+
+    def trouble(self, message=None, tb=None):
+        """Determine action to take when a download problem appears.
+
+        Depending on whether the downloader has been configured to ignore
+        download errors or not, this method may or may not raise an
+        exception when errors are found, after printing the message.
+
+        tb, if given, is additional traceback information.
+        """
+        if message is not None:
+            self.to_stderr(message)
+        if self.params.get('verbose'):
+            if tb is None:
+                if sys.exc_info()[0]:  # if .trouble has been called from an except block
+                    tb = ''
+                    if hasattr(sys.exc_info()[1], 'exc_info') and sys.exc_info()[1].exc_info[0]:
+                        tb += ''.join(traceback.format_exception(*sys.exc_info()[1].exc_info))
+                    tb += compat_str(traceback.format_exc())
+                else:
+                    tb_data = traceback.format_list(traceback.extract_stack())
+                    tb = ''.join(tb_data)
+            self.to_stderr(tb)
+        if not self.params.get('ignoreerrors', False):
+            if sys.exc_info()[0] and hasattr(sys.exc_info()[1], 'exc_info') and sys.exc_info()[1].exc_info[0]:
+                exc_info = sys.exc_info()[1].exc_info
+            else:
+                exc_info = sys.exc_info()
+            raise DownloadError(message, exc_info)
+        self._download_retcode = 1
+
+    def report_warning(self, message):
+        '''
+        Print the message to stderr; it will be prefixed with 'WARNING:'.
+        If stderr is a tty file, the 'WARNING:' prefix will be colored.
+        '''
+        if self.params.get('logger') is not None:
+            self.params['logger'].warning(message)
+        else:
+            if self.params.get('no_warnings'):
+                return
+            if not self.params.get('no_color') and self._err_file.isatty() and os.name != 'nt':
+                _msg_header = '\033[0;33mWARNING:\033[0m'
+            else:
+                _msg_header = 'WARNING:'
+            warning_message = '%s %s' % (_msg_header, message)
+            self.to_stderr(warning_message)
+
+    def report_error(self, message, tb=None):
+        '''
+        Do the same as trouble, but prefix the message with 'ERROR:',
+        colored in red if stderr is a tty file.
+        '''
+        if not self.params.get('no_color') and self._err_file.isatty() and os.name != 'nt':
+            _msg_header = '\033[0;31mERROR:\033[0m'
+        else:
+            _msg_header = 'ERROR:'
+        error_message = '%s %s' % (_msg_header, message)
+        self.trouble(error_message, tb)
+
+    def report_file_already_downloaded(self, file_name):
+        """Report file has already been fully downloaded."""
+        try:
+            self.to_screen('[download] %s has already been downloaded' % file_name)
+        except UnicodeEncodeError:
+            self.to_screen('[download] The file has already been downloaded')
+
+    def prepare_filename(self, info_dict):
+        """Generate the output filename."""
+        try:
+            template_dict = dict(info_dict)
+
+            template_dict['epoch'] = int(time.time())
+            autonumber_size = self.params.get('autonumber_size')
+            if autonumber_size is None:
+                autonumber_size = 5
+            autonumber_templ = '%0' + str(autonumber_size) + 'd'
+            template_dict['autonumber'] = autonumber_templ % self._num_downloads
+            if template_dict.get('playlist_index') is not None:
+                template_dict['playlist_index'] = '%0*d' % (len(str(template_dict['n_entries'])), template_dict['playlist_index'])
+            if template_dict.get('resolution') is None:
+                if template_dict.get('width') and template_dict.get('height'):
+                    template_dict['resolution'] = '%dx%d' % (template_dict['width'], template_dict['height'])
+                elif template_dict.get('height'):
+                    template_dict['resolution'] = '%sp' % template_dict['height']
+                elif template_dict.get('width'):
+                    template_dict['resolution'] = '?x%d' % template_dict['width']
+
+            sanitize = lambda k, v: sanitize_filename(
+                compat_str(v),
+                restricted=self.params.get('restrictfilenames'),
+                is_id=(k == 'id'))
+            template_dict = dict((k, sanitize(k, v))
+                                 for k, v in template_dict.items()
+                                 if v is not None)
+            template_dict = collections.defaultdict(lambda: 'NA', template_dict)
+
+            outtmpl = sanitize_path(self.params.get('outtmpl', DEFAULT_OUTTMPL))
+            tmpl = compat_expanduser(outtmpl)
+            filename = tmpl % template_dict
+            # Temporary fix for #4787
+            # 'Treat' all problem characters by passing filename through preferredencoding
+            # to work around encoding issues with subprocess on Python 2 on Windows
+            if sys.version_info < (3, 0) and sys.platform == 'win32':
+                filename = encodeFilename(filename, True).decode(preferredencoding())
+            return filename
+        except ValueError as err:
+            self.report_error('Error in output template: ' + str(err) + ' (encoding: ' + repr(preferredencoding()) + ')')
+            return None
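+
+    # A worked example of the template expansion above (all values are
+    # hypothetical):
+    #
+    #     ydl = YoutubeDL({'outtmpl': '%(title)s-%(id)s.%(ext)s'})
+    #     ydl.prepare_filename({'id': 'abc123', 'title': 'My Clip',
+    #                           'ext': 'mp4'})
+    #     # -> 'My Clip-abc123.mp4' (missing fields would render as 'NA')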
+
+    def _match_entry(self, info_dict, incomplete):
+        """ Returns None iff the file should be downloaded """
+
+        video_title = info_dict.get('title', info_dict.get('id', 'video'))
+        if 'title' in info_dict:
+            # 'title' may be missing when we're just evaluating the playlist
+            title = info_dict['title']
+            matchtitle = self.params.get('matchtitle', False)
+            if matchtitle:
+                if not re.search(matchtitle, title, re.IGNORECASE):
+                    return '"' + title + '" title did not match pattern "' + matchtitle + '"'
+            rejecttitle = self.params.get('rejecttitle', False)
+            if rejecttitle:
+                if re.search(rejecttitle, title, re.IGNORECASE):
+                    return '"' + title + '" title matched reject pattern "' + rejecttitle + '"'
+        date = info_dict.get('upload_date', None)
+        if date is not None:
+            dateRange = self.params.get('daterange', DateRange())
+            if date not in dateRange:
+                return '%s upload date is not in range %s' % (date_from_str(date).isoformat(), dateRange)
+        view_count = info_dict.get('view_count', None)
+        if view_count is not None:
+            min_views = self.params.get('min_views')
+            if min_views is not None and view_count < min_views:
+                return 'Skipping %s, because it has not reached minimum view count (%d/%d)' % (video_title, view_count, min_views)
+            max_views = self.params.get('max_views')
+            if max_views is not None and view_count > max_views:
+                return 'Skipping %s, because it has exceeded the maximum view count (%d/%d)' % (video_title, view_count, max_views)
+        if age_restricted(info_dict.get('age_limit'), self.params.get('age_limit')):
+            return 'Skipping "%s" because it is age restricted' % video_title
+        if self.in_download_archive(info_dict):
+            return '%s has already been recorded in archive' % video_title
+
+        if not incomplete:
+            match_filter = self.params.get('match_filter')
+            if match_filter is not None:
+                ret = match_filter(info_dict)
+                if ret is not None:
+                    return ret
+
+        return None
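+
+    # A sketch of a 'match_filter' callable honoring the contract above (the
+    # 60-second threshold is an arbitrary example): returning a message skips
+    # the video, returning None lets the download proceed.
+    #
+    #     def skip_short_videos(info_dict):
+    #         duration = info_dict.get('duration')
+    #         if duration is not None and duration < 60:
+    #             return 'Skipping %s: shorter than 60 seconds' % (
+    #                 info_dict.get('title'))
+    #         return None
+    #
+    #     ydl = YoutubeDL({'match_filter': skip_short_videos})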
+
+    @staticmethod
+    def add_extra_info(info_dict, extra_info):
+        '''Set the keys from extra_info in info dict if they are missing'''
+        for key, value in extra_info.items():
+            info_dict.setdefault(key, value)
+
+    def extract_info(self, url, download=True, ie_key=None, extra_info={},
+                     process=True, force_generic_extractor=False):
+        '''
+        Returns a list with a dictionary for each video we find.
+        If 'download', also downloads the videos.
+        extra_info is a dict containing the extra values to add to each result.
+        '''
+
+        if not ie_key and force_generic_extractor:
+            ie_key = 'Generic'
+
+        if ie_key:
+            ies = [self.get_info_extractor(ie_key)]
+        else:
+            ies = self._ies
+
+        for ie in ies:
+            if not ie.suitable(url):
+                continue
+
+            if not ie.working():
+                self.report_warning('The program functionality for this site has been marked as broken, '
+                                    'and will probably not work.')
+
+            try:
+                ie_result = ie.extract(url)
+                if ie_result is None:  # Finished already (backwards compatibility; listformats and friends should be moved here)
+                    break
+                if isinstance(ie_result, list):
+                    # Backwards compatibility: old IE result format
+                    ie_result = {
+                        '_type': 'compat_list',
+                        'entries': ie_result,
+                    }
+                self.add_default_extra_info(ie_result, ie, url)
+                if process:
+                    return self.process_ie_result(ie_result, download, extra_info)
+                else:
+                    return ie_result
+            except ExtractorError as de:  # An error we somewhat expected
+                self.report_error(compat_str(de), de.format_traceback())
+                break
+            except MaxDownloadsReached:
+                raise
+            except Exception as e:
+                if self.params.get('ignoreerrors', False):
+                    self.report_error(compat_str(e), tb=compat_str(traceback.format_exc()))
+                    break
+                else:
+                    raise
+        else:
+            self.report_error('no suitable InfoExtractor for URL %s' % url)
+
+    def add_default_extra_info(self, ie_result, ie, url):
+        self.add_extra_info(ie_result, {
+            'extractor': ie.IE_NAME,
+            'webpage_url': url,
+            'webpage_url_basename': url_basename(url),
+            'extractor_key': ie.ie_key(),
+        })
+
+    def process_ie_result(self, ie_result, download=True, extra_info={}):
+        """
+        Take the result of the ie (may be modified) and resolve all unresolved
+        references (URLs, playlist items).
+
+        It will also download the videos if 'download'.
+        Returns the resolved ie_result.
+        """
+
+        result_type = ie_result.get('_type', 'video')
+
+        if result_type in ('url', 'url_transparent'):
+            extract_flat = self.params.get('extract_flat', False)
+            if ((extract_flat == 'in_playlist' and 'playlist' in extra_info) or
+                    extract_flat is True):
+                if self.params.get('forcejson', False):
+                    self.to_stdout(json.dumps(ie_result))
+                return ie_result
+
+        if result_type == 'video':
+            self.add_extra_info(ie_result, extra_info)
+            return self.process_video_result(ie_result, download=download)
+        elif result_type == 'url':
+            # We have to add extra_info to the results because it may be
+            # contained in a playlist
+            return self.extract_info(ie_result['url'],
+                                     download,
+                                     ie_key=ie_result.get('ie_key'),
+                                     extra_info=extra_info)
+        elif result_type == 'url_transparent':
+            # Use the information from the embedding page
+            info = self.extract_info(
+                ie_result['url'], ie_key=ie_result.get('ie_key'),
+                extra_info=extra_info, download=False, process=False)
+
+            force_properties = dict(
+                (k, v) for k, v in ie_result.items() if v is not None)
+            for f in ('_type', 'url'):
+                if f in force_properties:
+                    del force_properties[f]
+            new_result = info.copy()
+            new_result.update(force_properties)
+
+            assert new_result.get('_type') != 'url_transparent'
+
+            return self.process_ie_result(
+                new_result, download=download, extra_info=extra_info)
+        elif result_type == 'playlist' or result_type == 'multi_video':
+            # We process each entry in the playlist
+            playlist = ie_result.get('title', None) or ie_result.get('id', None)
+            self.to_screen('[download] Downloading playlist: %s' % playlist)
+
+            playlist_results = []
+
+            playliststart = self.params.get('playliststart', 1) - 1
+            playlistend = self.params.get('playlistend', None)
+            # For backwards compatibility, interpret -1 as whole list
+            if playlistend == -1:
+                playlistend = None
+
+            playlistitems_str = self.params.get('playlist_items', None)
+            playlistitems = None
+            if playlistitems_str is not None:
+                def iter_playlistitems(format):
+                    for string_segment in format.split(','):
+                        if '-' in string_segment:
+                            start, end = string_segment.split('-')
+                            for item in range(int(start), int(end) + 1):
+                                yield int(item)
+                        else:
+                            yield int(string_segment)
+                playlistitems = iter_playlistitems(playlistitems_str)
+
+            ie_entries = ie_result['entries']
+            if isinstance(ie_entries, list):
+                n_all_entries = len(ie_entries)
+                if playlistitems:
+                    entries = [
+                        ie_entries[i - 1] for i in playlistitems
+                        if -n_all_entries <= i - 1 < n_all_entries]
+                else:
+                    entries = ie_entries[playliststart:playlistend]
+                n_entries = len(entries)
+                self.to_screen(
+                    "[%s] playlist %s: Collected %d video ids (downloading %d of them)" %
+                    (ie_result['extractor'], playlist, n_all_entries, n_entries))
+            elif isinstance(ie_entries, PagedList):
+                if playlistitems:
+                    entries = []
+                    for item in playlistitems:
+                        entries.extend(ie_entries.getslice(
+                            item - 1, item
+                        ))
+                else:
+                    entries = ie_entries.getslice(
+                        playliststart, playlistend)
+                n_entries = len(entries)
+                self.to_screen(
+                    "[%s] playlist %s: Downloading %d videos" %
+                    (ie_result['extractor'], playlist, n_entries))
+            else:  # iterable
+                if playlistitems:
+                    entry_list = list(ie_entries)
+                    entries = [entry_list[i - 1] for i in playlistitems]
+                else:
+                    entries = list(itertools.islice(
+                        ie_entries, playliststart, playlistend))
+                n_entries = len(entries)
+                self.to_screen(
+                    "[%s] playlist %s: Downloading %d videos" %
+                    (ie_result['extractor'], playlist, n_entries))
+
+            if self.params.get('playlistreverse', False):
+                entries = entries[::-1]
+
+            for i, entry in enumerate(entries, 1):
+                self.to_screen('[download] Downloading video %s of %s' % (i, n_entries))
+                extra = {
+                    'n_entries': n_entries,
+                    'playlist': playlist,
+                    'playlist_id': ie_result.get('id'),
+                    'playlist_title': ie_result.get('title'),
+                    'playlist_index': i + playliststart,
+                    'extractor': ie_result['extractor'],
+                    'webpage_url': ie_result['webpage_url'],
+                    'webpage_url_basename': url_basename(ie_result['webpage_url']),
+                    'extractor_key': ie_result['extractor_key'],
+                }
+
+                reason = self._match_entry(entry, incomplete=True)
+                if reason is not None:
+                    self.to_screen('[download] ' + reason)
+                    continue
+
+                entry_result = self.process_ie_result(entry,
+                                                      download=download,
+                                                      extra_info=extra)
+                playlist_results.append(entry_result)
+            ie_result['entries'] = playlist_results
+            return ie_result
+        elif result_type == 'compat_list':
+            self.report_warning(
+                'Extractor %s returned a compat_list result. '
+                'It needs to be updated.' % ie_result.get('extractor'))
+
+            def _fixup(r):
+                self.add_extra_info(
+                    r,
+                    {
+                        'extractor': ie_result['extractor'],
+                        'webpage_url': ie_result['webpage_url'],
+                        'webpage_url_basename': url_basename(ie_result['webpage_url']),
+                        'extractor_key': ie_result['extractor_key'],
+                    }
+                )
+                return r
+            ie_result['entries'] = [
+                self.process_ie_result(_fixup(r), download, extra_info)
+                for r in ie_result['entries']
+            ]
+            return ie_result
+        else:
+            raise Exception('Invalid result type: %s' % result_type)
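+
+    # Sketch of the playlist_items parsing performed above (the spec string
+    # is hypothetical):
+    #
+    #     list(iter_playlistitems('1-3,7'))  # -> [1, 2, 3, 7]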
+
+    def _apply_format_filter(self, format_spec, available_formats):
+        " Returns a tuple of the remaining format_spec and filtered formats "
+
+        OPERATORS = {
+            '<': operator.lt,
+            '<=': operator.le,
+            '>': operator.gt,
+            '>=': operator.ge,
+            '=': operator.eq,
+            '!=': operator.ne,
+        }
+        operator_rex = re.compile(r'''(?x)\s*\[
+            (?P<key>width|height|tbr|abr|vbr|asr|filesize|fps)
+            \s*(?P<op>%s)(?P<none_inclusive>\s*\?)?\s*
+            (?P<value>[0-9.]+(?:[kKmMgGtTpPeEzZyY]i?[Bb]?)?)
+            \]$
+            ''' % '|'.join(map(re.escape, OPERATORS.keys())))
+        m = operator_rex.search(format_spec)
+        if m:
+            try:
+                comparison_value = int(m.group('value'))
+            except ValueError:
+                comparison_value = parse_filesize(m.group('value'))
+                if comparison_value is None:
+                    comparison_value = parse_filesize(m.group('value') + 'B')
+                if comparison_value is None:
+                    raise ValueError(
+                        'Invalid value %r in format specification %r' % (
+                            m.group('value'), format_spec))
+            op = OPERATORS[m.group('op')]
+
+        if not m:
+            STR_OPERATORS = {
+                '=': operator.eq,
+                '!=': operator.ne,
+            }
+            str_operator_rex = re.compile(r'''(?x)\s*\[
+                \s*(?P<key>ext|acodec|vcodec|container|protocol)
+                \s*(?P<op>%s)(?P<none_inclusive>\s*\?)?
+                \s*(?P<value>[a-zA-Z0-9_-]+)
+                \s*\]$
+                ''' % '|'.join(map(re.escape, STR_OPERATORS.keys())))
+            m = str_operator_rex.search(format_spec)
+            if m:
+                comparison_value = m.group('value')
+                op = STR_OPERATORS[m.group('op')]
+
+        if not m:
+            raise ValueError('Invalid format specification %r' % format_spec)
+
+        def _filter(f):
+            actual_value = f.get(m.group('key'))
+            if actual_value is None:
+                return m.group('none_inclusive')
+            return op(actual_value, comparison_value)
+        new_formats = [f for f in available_formats if _filter(f)]
+
+        new_format_spec = format_spec[:-len(m.group(0))]
+        if not new_format_spec:
+            new_format_spec = 'best'
+
+        return (new_format_spec, new_formats)
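+
+    # Examples of filter specs accepted by the regexes above (the field
+    # values are hypothetical):
+    #
+    #     'best[height<=720]'    # numeric comparison on a format field
+    #     'best[filesize<100M]'  # parse_filesize resolves the 'M' suffix
+    #     'best[ext=mp4]'        # string comparison via STR_OPERATORS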
+
+    def select_format(self, format_spec, available_formats):
+        while format_spec.endswith(']'):
+            format_spec, available_formats = self._apply_format_filter(
+                format_spec, available_formats)
+        if not available_formats:
+            return None
+
+        if format_spec in ['best', 'worst', None]:
+            format_idx = 0 if format_spec == 'worst' else -1
+            audiovideo_formats = [
+                f for f in available_formats
+                if f.get('vcodec') != 'none' and f.get('acodec') != 'none']
+            if audiovideo_formats:
+                return audiovideo_formats[format_idx]
+            # for audio-only (SoundCloud) or video-only (Imgur) URLs, select the best/worst format available
+            elif (all(f.get('acodec') != 'none' for f in available_formats) or
+                  all(f.get('vcodec') != 'none' for f in available_formats)):
+                return available_formats[format_idx]
+        elif format_spec == 'bestaudio':
+            audio_formats = [
+                f for f in available_formats
+                if f.get('vcodec') == 'none']
+            if audio_formats:
+                return audio_formats[-1]
+        elif format_spec == 'worstaudio':
+            audio_formats = [
+                f for f in available_formats
+                if f.get('vcodec') == 'none']
+            if audio_formats:
+                return audio_formats[0]
+        elif format_spec == 'bestvideo':
+            video_formats = [
+                f for f in available_formats
+                if f.get('acodec') == 'none']
+            if video_formats:
+                return video_formats[-1]
+        elif format_spec == 'worstvideo':
+            video_formats = [
+                f for f in available_formats
+                if f.get('acodec') == 'none']
+            if video_formats:
+                return video_formats[0]
+        else:
+            extensions = ['mp4', 'flv', 'webm', '3gp', 'm4a', 'mp3', 'ogg', 'aac', 'wav']
+            if format_spec in extensions:
+                filter_f = lambda f: f['ext'] == format_spec
+            else:
+                filter_f = lambda f: f['format_id'] == format_spec
+            matches = list(filter(filter_f, available_formats))
+            if matches:
+                return matches[-1]
+        return None
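+
+    # The selection grammar handled here and in process_video_result supports,
+    # for instance (illustrative values):
+    #
+    #     'best'                 # best format with both audio and video
+    #     'bestvideo+bestaudio'  # two formats, merged by FFmpegMergerPP
+    #     '22/best'              # first available of the '/' alternatives
+    #     'mp4'                  # last (typically best) format with that ext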
+
+    def _calc_headers(self, info_dict):
+        res = std_headers.copy()
+
+        add_headers = info_dict.get('http_headers')
+        if add_headers:
+            res.update(add_headers)
+
+        cookies = self._calc_cookies(info_dict)
+        if cookies:
+            res['Cookie'] = cookies
+
+        return res
+
+    def _calc_cookies(self, info_dict):
+        pr = compat_urllib_request.Request(info_dict['url'])
+        self.cookiejar.add_cookie_header(pr)
+        return pr.get_header('Cookie')
+
+    def process_video_result(self, info_dict, download=True):
+        assert info_dict.get('_type', 'video') == 'video'
+
+        if 'id' not in info_dict:
+            raise ExtractorError('Missing "id" field in extractor result')
+        if 'title' not in info_dict:
+            raise ExtractorError('Missing "title" field in extractor result')
+
+        if 'playlist' not in info_dict:
+            # It isn't part of a playlist
+            info_dict['playlist'] = None
+            info_dict['playlist_index'] = None
+
+        thumbnails = info_dict.get('thumbnails')
+        if thumbnails is None:
+            thumbnail = info_dict.get('thumbnail')
+            if thumbnail:
+                info_dict['thumbnails'] = thumbnails = [{'url': thumbnail}]
+        if thumbnails:
+            thumbnails.sort(key=lambda t: (
+                t.get('preference'), t.get('width'), t.get('height'),
+                t.get('id'), t.get('url')))
+            for i, t in enumerate(thumbnails):
+                if t.get('width') and t.get('height'):
+                    t['resolution'] = '%dx%d' % (t['width'], t['height'])
+                if t.get('id') is None:
+                    t['id'] = '%d' % i
+
+        if thumbnails and 'thumbnail' not in info_dict:
+            info_dict['thumbnail'] = thumbnails[-1]['url']
+
+        if 'display_id' not in info_dict and 'id' in info_dict:
+            info_dict['display_id'] = info_dict['id']
+
+        if info_dict.get('upload_date') is None and info_dict.get('timestamp') is not None:
+            # Working around out-of-range timestamp values (e.g. negative ones on Windows,
+            # see http://bugs.python.org/issue1646728)
+            try:
+                upload_date = datetime.datetime.utcfromtimestamp(info_dict['timestamp'])
+                info_dict['upload_date'] = upload_date.strftime('%Y%m%d')
+            except (ValueError, OverflowError, OSError):
+                pass
+
+        if self.params.get('listsubtitles', False):
+            if 'automatic_captions' in info_dict:
+                self.list_subtitles(info_dict['id'], info_dict.get('automatic_captions'), 'automatic captions')
+            self.list_subtitles(info_dict['id'], info_dict.get('subtitles'), 'subtitles')
+            return
+        info_dict['requested_subtitles'] = self.process_subtitles(
+            info_dict['id'], info_dict.get('subtitles'),
+            info_dict.get('automatic_captions'))
+
+        # We now pick which formats have to be downloaded
+        if info_dict.get('formats') is None:
+            # There's only one format available
+            formats = [info_dict]
+        else:
+            formats = info_dict['formats']
+
+        if not formats:
+            raise ExtractorError('No video formats found!')
+
+        formats_dict = {}
+
+        # We check that all the formats have the format and format_id fields
+        for i, format in enumerate(formats):
+            if 'url' not in format:
+                raise ExtractorError('Missing "url" key in result (index %d)' % i)
+
+            if format.get('format_id') is None:
+                format['format_id'] = compat_str(i)
+            format_id = format['format_id']
+            if format_id not in formats_dict:
+                formats_dict[format_id] = []
+            formats_dict[format_id].append(format)
+
+        # Make sure all formats have unique format_id
+        for format_id, ambiguous_formats in formats_dict.items():
+            if len(ambiguous_formats) > 1:
+                for i, format in enumerate(ambiguous_formats):
+                    format['format_id'] = '%s-%d' % (format_id, i)
+
+        for i, format in enumerate(formats):
+            if format.get('format') is None:
+                format['format'] = '{id} - {res}{note}'.format(
+                    id=format['format_id'],
+                    res=self.format_resolution(format),
+                    note=' ({0})'.format(format['format_note']) if format.get('format_note') is not None else '',
+                )
+            # Automatically determine file extension if missing
+            if 'ext' not in format:
+                format['ext'] = determine_ext(format['url']).lower()
+            # Add HTTP headers, so that external programs can use them from the
+            # json output
+            full_format_info = info_dict.copy()
+            full_format_info.update(format)
+            format['http_headers'] = self._calc_headers(full_format_info)
+
+        # TODO Central sorting goes here
+
+        if formats[0] is not info_dict:
+            # only set the 'formats' field if the original info_dict lists them;
+            # otherwise we end up with a circular reference: the first (and only)
+            # element in the 'formats' field in info_dict is info_dict itself,
+            # which can't be exported to JSON
+            info_dict['formats'] = formats
+        if self.params.get('listformats'):
+            self.list_formats(info_dict)
+            return
+        if self.params.get('list_thumbnails'):
+            self.list_thumbnails(info_dict)
+            return
+
+        req_format = self.params.get('format')
+        if req_format is None:
+            req_format_list = []
+            if (self.params.get('outtmpl', DEFAULT_OUTTMPL) != '-' and
+                    info_dict['extractor'] in ['youtube', 'ted']):
+                merger = FFmpegMergerPP(self)
+                if merger.available and merger.can_merge():
+                    req_format_list.append('bestvideo+bestaudio')
+            req_format_list.append('best')
+            req_format = '/'.join(req_format_list)
+        formats_to_download = []
+        if req_format == 'all':
+            formats_to_download = formats
+        else:
+            for rfstr in req_format.split(','):
+                # We can accept formats requested in the form 34/5/best; we pick
+                # the first one that is available, starting from the left
+                req_formats = rfstr.split('/')
+                for rf in req_formats:
+                    if re.match(r'.+?\+.+?', rf) is not None:
+                        # Two formats have been requested like '137+139'
+                        format_1, format_2 = rf.split('+')
+                        formats_info = (self.select_format(format_1, formats),
+                                        self.select_format(format_2, formats))
+                        if all(formats_info):
+                            # The first format must contain the video and the
+                            # second the audio
+                            if formats_info[0].get('vcodec') == 'none':
+                                self.report_error('The first format must '
+                                                  'contain the video, try using '
+                                                  '"-f %s+%s"' % (format_2, format_1))
+                                return
+                            output_ext = (
+                                formats_info[0]['ext']
+                                if self.params.get('merge_output_format') is None
+                                else self.params['merge_output_format'])
+                            selected_format = {
+                                'requested_formats': formats_info,
+                                'format': '%s+%s' % (formats_info[0].get('format'),
+                                                     formats_info[1].get('format')),
+                                'format_id': '%s+%s' % (formats_info[0].get('format_id'),
+                                                        formats_info[1].get('format_id')),
+                                'width': formats_info[0].get('width'),
+                                'height': formats_info[0].get('height'),
+                                'resolution': formats_info[0].get('resolution'),
+                                'fps': formats_info[0].get('fps'),
+                                'vcodec': formats_info[0].get('vcodec'),
+                                'vbr': formats_info[0].get('vbr'),
+                                'stretched_ratio': formats_info[0].get('stretched_ratio'),
+                                'acodec': formats_info[1].get('acodec'),
+                                'abr': formats_info[1].get('abr'),
+                                'ext': output_ext,
+                            }
+                        else:
+                            selected_format = None
+                    else:
+                        selected_format = self.select_format(rf, formats)
+                    if selected_format is not None:
+                        formats_to_download.append(selected_format)
+                        break
+        if not formats_to_download:
+            raise ExtractorError('requested format not available',
+                                 expected=True)
+
+        if download:
+            if len(formats_to_download) > 1:
+                self.to_screen('[info] %s: downloading video in %s formats' % (info_dict['id'], len(formats_to_download)))
+            for format in formats_to_download:
+                new_info = dict(info_dict)
+                new_info.update(format)
+                self.process_info(new_info)
+        # We update the info dict with the best quality format (backwards compatibility)
+        info_dict.update(formats_to_download[-1])
+        return info_dict
+
+    def process_subtitles(self, video_id, normal_subtitles, automatic_captions):
+        """Select the requested subtitles and their format"""
+        available_subs = {}
+        if normal_subtitles and self.params.get('writesubtitles'):
+            available_subs.update(normal_subtitles)
+        if automatic_captions and self.params.get('writeautomaticsub'):
+            for lang, cap_info in automatic_captions.items():
+                if lang not in available_subs:
+                    available_subs[lang] = cap_info
+
+        if (not self.params.get('writesubtitles') and not
+                self.params.get('writeautomaticsub') or not
+                available_subs):
+            return None
+
+        if self.params.get('allsubtitles', False):
+            requested_langs = available_subs.keys()
+        else:
+            if self.params.get('subtitleslangs', False):
+                requested_langs = self.params.get('subtitleslangs')
+            elif 'en' in available_subs:
+                requested_langs = ['en']
+            else:
+                requested_langs = [list(available_subs.keys())[0]]
+
+        formats_query = self.params.get('subtitlesformat', 'best')
+        formats_preference = formats_query.split('/') if formats_query else []
+        subs = {}
+        for lang in requested_langs:
+            formats = available_subs.get(lang)
+            if formats is None:
+                self.report_warning('%s subtitles not available for %s' % (lang, video_id))
+                continue
+            for ext in formats_preference:
+                if ext == 'best':
+                    f = formats[-1]
+                    break
+                matches = list(filter(lambda f: f['ext'] == ext, formats))
+                if matches:
+                    f = matches[-1]
+                    break
+            else:
+                f = formats[-1]
+                self.report_warning(
+                    'No subtitle format found matching "%s" for language %s, '
+                    'using %s' % (formats_query, lang, f['ext']))
+            subs[lang] = f
+        return subs
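+
+    # An illustrative parameter combination for the selection logic above
+    # (language and format values are assumptions):
+    #
+    #     ydl = YoutubeDL({
+    #         'writesubtitles': True,
+    #         'subtitleslangs': ['en', 'de'],
+    #         'subtitlesformat': 'srt/best',
+    #     })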
+
+    def process_info(self, info_dict):
+        """Process a single resolved IE result."""
+
+        assert info_dict.get('_type', 'video') == 'video'
+
+        max_downloads = self.params.get('max_downloads')
+        if max_downloads is not None:
+            if self._num_downloads >= int(max_downloads):
+                raise MaxDownloadsReached()
+
+        info_dict['fulltitle'] = info_dict['title']
+        if len(info_dict['title']) > 200:
+            info_dict['title'] = info_dict['title'][:197] + '...'
+
+        if 'format' not in info_dict:
+            info_dict['format'] = info_dict['ext']
+
+        reason = self._match_entry(info_dict, incomplete=False)
+        if reason is not None:
+            self.to_screen('[download] ' + reason)
+            return
+
+        self._num_downloads += 1
+
+        info_dict['_filename'] = filename = self.prepare_filename(info_dict)
+
+        # Forced printings
+        if self.params.get('forcetitle', False):
+            self.to_stdout(info_dict['fulltitle'])
+        if self.params.get('forceid', False):
+            self.to_stdout(info_dict['id'])
+        if self.params.get('forceurl', False):
+            if info_dict.get('requested_formats') is not None:
+                for f in info_dict['requested_formats']:
+                    self.to_stdout(f['url'] + f.get('play_path', ''))
+            else:
+                # For RTMP URLs, also include the playpath
+                self.to_stdout(info_dict['url'] + info_dict.get('play_path', ''))
+        if self.params.get('forcethumbnail', False) and info_dict.get('thumbnail') is not None:
+            self.to_stdout(info_dict['thumbnail'])
+        if self.params.get('forcedescription', False) and info_dict.get('description') is not None:
+            self.to_stdout(info_dict['description'])
+        if self.params.get('forcefilename', False) and filename is not None:
+            self.to_stdout(filename)
+        if self.params.get('forceduration', False) and info_dict.get('duration') is not None:
+            self.to_stdout(formatSeconds(info_dict['duration']))
+        if self.params.get('forceformat', False):
+            self.to_stdout(info_dict['format'])
+        if self.params.get('forcejson', False):
+            self.to_stdout(json.dumps(info_dict))
+
+        # Do nothing else if in simulate mode
+        if self.params.get('simulate', False):
+            return
+
+        if filename is None:
+            return
+
+        try:
+            dn = os.path.dirname(sanitize_path(encodeFilename(filename)))
+            if dn and not os.path.exists(dn):
+                os.makedirs(dn)
+        except (OSError, IOError) as err:
+            self.report_error('unable to create directory ' + compat_str(err))
+            return
+
+        if self.params.get('writedescription', False):
+            descfn = replace_extension(filename, 'description', info_dict.get('ext'))
+            if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(descfn)):
+                self.to_screen('[info] Video description is already present')
+            elif info_dict.get('description') is None:
+                self.report_warning('There\'s no description to write.')
+            else:
+                try:
+                    self.to_screen('[info] Writing video description to: ' + descfn)
+                    with io.open(encodeFilename(descfn), 'w', encoding='utf-8') as descfile:
+                        descfile.write(info_dict['description'])
+                except (OSError, IOError):
+                    self.report_error('Cannot write description file ' + descfn)
+                    return
+
+        if self.params.get('writeannotations', False):
+            annofn = replace_extension(filename, 'annotations.xml', info_dict.get('ext'))
+            if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(annofn)):
+                self.to_screen('[info] Video annotations are already present')
+            else:
+                try:
+                    self.to_screen('[info] Writing video annotations to: ' + annofn)
+                    with io.open(encodeFilename(annofn), 'w', encoding='utf-8') as annofile:
+                        annofile.write(info_dict['annotations'])
+                except (KeyError, TypeError):
+                    self.report_warning('There are no annotations to write.')
+                except (OSError, IOError):
+                    self.report_error('Cannot write annotations file: ' + annofn)
+                    return
+
+        subtitles_are_requested = any([self.params.get('writesubtitles', False),
+                                       self.params.get('writeautomaticsub')])
+
+        if subtitles_are_requested and info_dict.get('requested_subtitles'):
+            # Subtitle download errors are already handled in the relevant IE,
+            # so processing continues silently when an IE does not support subtitles.
+            subtitles = info_dict['requested_subtitles']
+            ie = self.get_info_extractor(info_dict['extractor_key'])
+            for sub_lang, sub_info in subtitles.items():
+                sub_format = sub_info['ext']
+                if sub_info.get('data') is not None:
+                    sub_data = sub_info['data']
+                else:
+                    try:
+                        sub_data = ie._download_webpage(
+                            sub_info['url'], info_dict['id'], note=False)
+                    except ExtractorError as err:
+                        self.report_warning('Unable to download subtitle for "%s": %s' %
+                                            (sub_lang, compat_str(err.cause)))
+                        continue
+                try:
+                    sub_filename = subtitles_filename(filename, sub_lang, sub_format)
+                    if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(sub_filename)):
+                        self.to_screen('[info] Video subtitle %s.%s is already present' % (sub_lang, sub_format))
+                    else:
+                        self.to_screen('[info] Writing video subtitles to: ' + sub_filename)
+                        with io.open(encodeFilename(sub_filename), 'w', encoding='utf-8') as subfile:
+                            subfile.write(sub_data)
+                except (OSError, IOError):
+                    self.report_error('Cannot write subtitles file ' + sub_filename)
+                    return
+
+        if self.params.get('writeinfojson', False):
+            infofn = replace_extension(filename, 'info.json', info_dict.get('ext'))
+            if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(infofn)):
+                self.to_screen('[info] Video description metadata is already present')
+            else:
+                self.to_screen('[info] Writing video description metadata as JSON to: ' + infofn)
+                try:
+                    write_json_file(self.filter_requested_info(info_dict), infofn)
+                except (OSError, IOError):
+                    self.report_error('Cannot write metadata to JSON file ' + infofn)
+                    return
+
+        self._write_thumbnails(info_dict, filename)
+
+        if not self.params.get('skip_download', False):
+            try:
+                def dl(name, info):
+                    fd = get_suitable_downloader(info, self.params)(self, self.params)
+                    for ph in self._progress_hooks:
+                        fd.add_progress_hook(ph)
+                    if self.params.get('verbose'):
+                        self.to_stdout('[debug] Invoking downloader on %r' % info.get('url'))
+                    return fd.download(name, info)
+
+                if info_dict.get('requested_formats') is not None:
+                    downloaded = []
+                    success = True
+                    merger = FFmpegMergerPP(self)
+                    if not merger.available:
+                        postprocessors = []
+                        self.report_warning('You have requested multiple '
+                                            'formats but neither ffmpeg nor avconv is installed.'
+                                            ' The formats won\'t be merged.')
+                    else:
+                        postprocessors = [merger]
+
+                    def compatible_formats(formats):
+                        video, audio = formats
+                        # Check extension
+                        video_ext, audio_ext = video.get('ext'), audio.get('ext')
+                        if video_ext and audio_ext:
+                            COMPATIBLE_EXTS = (
+                                ('mp3', 'mp4', 'm4a', 'm4p', 'm4b', 'm4r', 'm4v'),
+                                ('webm',)  # trailing comma makes this a tuple, not a bare string
+                            )
+                            for exts in COMPATIBLE_EXTS:
+                                if video_ext in exts and audio_ext in exts:
+                                    return True
+                        # TODO: Check acodec/vcodec
+                        return False
+
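+                    # Illustrative examples (not exhaustive): an 'mp4' video
+                    # stream plus an 'm4a' audio stream is considered compatible
+                    # and merged as-is, while 'mp4' video plus 'webm' audio is
+                    # not and falls back to the mkv merge below.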
+                    filename_real_ext = os.path.splitext(filename)[1][1:]
+                    filename_wo_ext = (
+                        os.path.splitext(filename)[0]
+                        if filename_real_ext == info_dict['ext']
+                        else filename)
+                    requested_formats = info_dict['requested_formats']
+                    if self.params.get('merge_output_format') is None and not compatible_formats(requested_formats):
+                        info_dict['ext'] = 'mkv'
+                        self.report_warning(
+                            'Requested formats are incompatible for merge and will be merged into mkv.')
+                    # Ensure filename always has a correct extension for successful merge
+                    filename = '%s.%s' % (filename_wo_ext, info_dict['ext'])
+                    if os.path.exists(encodeFilename(filename)):
+                        self.to_screen(
+                            '[download] %s has already been downloaded and '
+                            'merged' % filename)
+                    else:
+                        for f in requested_formats:
+                            new_info = dict(info_dict)
+                            new_info.update(f)
+                            fname = self.prepare_filename(new_info)
+                            fname = prepend_extension(fname, 'f%s' % f['format_id'], new_info['ext'])
+                            downloaded.append(fname)
+                            partial_success = dl(fname, new_info)
+                            success = success and partial_success
+                        info_dict['__postprocessors'] = postprocessors
+                        info_dict['__files_to_merge'] = downloaded
+                else:
+                    # Just a single file
+                    success = dl(filename, info_dict)
+            except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                self.report_error('unable to download video data: %s' % str(err))
+                return
+            except (OSError, IOError) as err:
+                raise UnavailableVideoError(err)
+            except (ContentTooShortError, ) as err:
+                self.report_error('content too short (expected %s bytes and served %s)' % (err.expected, err.downloaded))
+                return
+
+            if success:
+                # Fixup content
+                fixup_policy = self.params.get('fixup')
+                if fixup_policy is None:
+                    fixup_policy = 'detect_or_warn'
+
+                stretched_ratio = info_dict.get('stretched_ratio')
+                if stretched_ratio is not None and stretched_ratio != 1:
+                    if fixup_policy == 'warn':
+                        self.report_warning('%s: Non-uniform pixel ratio (%s)' % (
+                            info_dict['id'], stretched_ratio))
+                    elif fixup_policy == 'detect_or_warn':
+                        stretched_pp = FFmpegFixupStretchedPP(self)
+                        if stretched_pp.available:
+                            info_dict.setdefault('__postprocessors', [])
+                            info_dict['__postprocessors'].append(stretched_pp)
+                        else:
+                            self.report_warning(
+                                '%s: Non-uniform pixel ratio (%s). Install ffmpeg or avconv to fix this automatically.' % (
+                                    info_dict['id'], stretched_ratio))
+                    else:
+                        assert fixup_policy in ('ignore', 'never')
+
+                if info_dict.get('requested_formats') is None and info_dict.get('container') == 'm4a_dash':
+                    if fixup_policy == 'warn':
+                        self.report_warning('%s: writing DASH m4a. Only some players support this container.' % (
+                            info_dict['id']))
+                    elif fixup_policy == 'detect_or_warn':
+                        fixup_pp = FFmpegFixupM4aPP(self)
+                        if fixup_pp.available:
+                            info_dict.setdefault('__postprocessors', [])
+                            info_dict['__postprocessors'].append(fixup_pp)
+                        else:
+                            self.report_warning(
+                                '%s: writing DASH m4a. Only some players support this container. Install ffmpeg or avconv to fix this automatically.' % (
+                                    info_dict['id']))
+                    else:
+                        assert fixup_policy in ('ignore', 'never')
+
+                try:
+                    self.post_process(filename, info_dict)
+                except (PostProcessingError) as err:
+                    self.report_error('postprocessing: %s' % str(err))
+                    return
+                self.record_download_archive(info_dict)
+
+    def download(self, url_list):
+        """Download a given list of URLs."""
+        outtmpl = self.params.get('outtmpl', DEFAULT_OUTTMPL)
+        if (len(url_list) > 1 and
+                '%' not in outtmpl and
+                self.params.get('max_downloads') != 1):
+            raise SameFileError(outtmpl)
+
+        for url in url_list:
+            try:
+                # Despite its name, extract_info() also downloads the videos
+                res = self.extract_info(
+                    url, force_generic_extractor=self.params.get('force_generic_extractor', False))
+            except UnavailableVideoError:
+                self.report_error('unable to download video')
+            except MaxDownloadsReached:
+                self.to_screen('[info] Maximum number of downloaded files reached.')
+                raise
+            else:
+                if self.params.get('dump_single_json', False):
+                    self.to_stdout(json.dumps(res))
+
+        return self._download_retcode
+
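+    # A rough usage sketch of the public API (parameter values and URL are
+    # illustrative only):
+    #
+    #     ydl = YoutubeDL({'outtmpl': '%(title)s-%(id)s.%(ext)s'})
+    #     retcode = ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])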
+    def download_with_info_file(self, info_filename):
+        with contextlib.closing(fileinput.FileInput(
+                [info_filename], mode='r',
+                openhook=fileinput.hook_encoded('utf-8'))) as f:
+            # FileInput has no read() method, so we cannot use json.load()
+            info = self.filter_requested_info(json.loads('\n'.join(f)))
+        try:
+            self.process_ie_result(info, download=True)
+        except DownloadError:
+            webpage_url = info.get('webpage_url')
+            if webpage_url is not None:
+                self.report_warning('The info failed to download, trying with "%s"' % webpage_url)
+                return self.download([webpage_url])
+            else:
+                raise
+        return self._download_retcode
+
+    @staticmethod
+    def filter_requested_info(info_dict):
+        return dict(
+            (k, v) for k, v in info_dict.items()
+            if k not in ['requested_formats', 'requested_subtitles'])
+
+    def post_process(self, filename, ie_info):
+        """Run all the postprocessors on the given file."""
+        info = dict(ie_info)
+        info['filepath'] = filename
+        pps_chain = []
+        if ie_info.get('__postprocessors') is not None:
+            pps_chain.extend(ie_info['__postprocessors'])
+        pps_chain.extend(self._pps)
+        for pp in pps_chain:
+            files_to_delete = []
+            try:
+                files_to_delete, info = pp.run(info)
+            except PostProcessingError as e:
+                self.report_error(e.msg)
+            if files_to_delete and not self.params.get('keepvideo', False):
+                for old_filename in files_to_delete:
+                    self.to_screen('Deleting original file %s (pass -k to keep)' % old_filename)
+                    try:
+                        os.remove(encodeFilename(old_filename))
+                    except (IOError, OSError):
+                        self.report_warning('Unable to remove downloaded original file')
+
+    def _make_archive_id(self, info_dict):
+        # Lower-case the key to future-proof against case changes and to stay
+        # backwards compatible with archives written by prior versions
+        extractor = info_dict.get('extractor_key')
+        if extractor is None:
+            if 'id' in info_dict:
+                extractor = info_dict.get('ie_key')  # key in a playlist
+        if extractor is None:
+            return None  # Incomplete video information
+        return extractor.lower() + ' ' + info_dict['id']
+
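+    # For example, a YouTube video with id 'dQw4w9WgXcQ' (illustrative id) is
+    # recorded in the archive file as the line 'youtube dQw4w9WgXcQ'.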
+    def in_download_archive(self, info_dict):
+        fn = self.params.get('download_archive')
+        if fn is None:
+            return False
+
+        vid_id = self._make_archive_id(info_dict)
+        if vid_id is None:
+            return False  # Incomplete video information
+
+        try:
+            with locked_file(fn, 'r', encoding='utf-8') as archive_file:
+                for line in archive_file:
+                    if line.strip() == vid_id:
+                        return True
+        except IOError as ioe:
+            if ioe.errno != errno.ENOENT:
+                raise
+        return False
+
+    def record_download_archive(self, info_dict):
+        fn = self.params.get('download_archive')
+        if fn is None:
+            return
+        vid_id = self._make_archive_id(info_dict)
+        assert vid_id
+        with locked_file(fn, 'a', encoding='utf-8') as archive_file:
+            archive_file.write(vid_id + '\n')
+
+    @staticmethod
+    def format_resolution(format, default='unknown'):
+        if format.get('vcodec') == 'none':
+            return 'audio only'
+        if format.get('resolution') is not None:
+            return format['resolution']
+        if format.get('height') is not None:
+            if format.get('width') is not None:
+                res = '%sx%s' % (format['width'], format['height'])
+            else:
+                res = '%sp' % format['height']
+        elif format.get('width') is not None:
+            res = '?x%d' % format['width']
+        else:
+            res = default
+        return res
+
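+    # Illustrative outputs: '1280x720' when both dimensions are known, '720p'
+    # when only the height is known, '?x1280' when only the width is known,
+    # and 'audio only' when the format has no video codec.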
+    def _format_note(self, fdict):
+        res = ''
+        if fdict.get('ext') in ['f4f', 'f4m']:
+            res += '(unsupported) '
+        if fdict.get('format_note') is not None:
+            res += fdict['format_note'] + ' '
+        if fdict.get('tbr') is not None:
+            res += '%4dk ' % fdict['tbr']
+        if fdict.get('container') is not None:
+            if res:
+                res += ', '
+            res += '%s container' % fdict['container']
+        if (fdict.get('vcodec') is not None and
+                fdict.get('vcodec') != 'none'):
+            if res:
+                res += ', '
+            res += fdict['vcodec']
+            if fdict.get('vbr') is not None:
+                res += '@'
+        elif fdict.get('vbr') is not None and fdict.get('abr') is not None:
+            res += 'video@'
+        if fdict.get('vbr') is not None:
+            res += '%4dk' % fdict['vbr']
+        if fdict.get('fps') is not None:
+            res += ', %sfps' % fdict['fps']
+        if fdict.get('acodec') is not None:
+            if res:
+                res += ', '
+            if fdict['acodec'] == 'none':
+                res += 'video only'
+            else:
+                res += '%-5s' % fdict['acodec']
+        elif fdict.get('abr') is not None:
+            if res:
+                res += ', '
+            res += 'audio'
+        if fdict.get('abr') is not None:
+            res += '@%3dk' % fdict['abr']
+        if fdict.get('asr') is not None:
+            res += ' (%5dHz)' % fdict['asr']
+        if fdict.get('filesize') is not None:
+            if res:
+                res += ', '
+            res += format_bytes(fdict['filesize'])
+        elif fdict.get('filesize_approx') is not None:
+            if res:
+                res += ', '
+            res += '~' + format_bytes(fdict['filesize_approx'])
+        return res
+
+    def list_formats(self, info_dict):
+        formats = info_dict.get('formats', [info_dict])
+        table = [
+            [f['format_id'], f['ext'], self.format_resolution(f), self._format_note(f)]
+            for f in formats
+            if f.get('preference') is None or f['preference'] >= -1000]
+        if len(formats) > 1:
+            table[-1][-1] += (' ' if table[-1][-1] else '') + '(best)'
+
+        header_line = ['format code', 'extension', 'resolution', 'note']
+        self.to_screen(
+            '[info] Available formats for %s:\n%s' %
+            (info_dict['id'], render_table(header_line, table)))
+
+    def list_thumbnails(self, info_dict):
+        thumbnails = info_dict.get('thumbnails')
+        if not thumbnails:
+            tn_url = info_dict.get('thumbnail')
+            if tn_url:
+                thumbnails = [{'id': '0', 'url': tn_url}]
+            else:
+                self.to_screen(
+                    '[info] No thumbnails present for %s' % info_dict['id'])
+                return
+
+        self.to_screen(
+            '[info] Thumbnails for %s:' % info_dict['id'])
+        self.to_screen(render_table(
+            ['ID', 'width', 'height', 'URL'],
+            [[t['id'], t.get('width', 'unknown'), t.get('height', 'unknown'), t['url']] for t in thumbnails]))
+
+    def list_subtitles(self, video_id, subtitles, name='subtitles'):
+        if not subtitles:
+            self.to_screen('%s has no %s' % (video_id, name))
+            return
+        self.to_screen(
+            'Available %s for %s:' % (name, video_id))
+        self.to_screen(render_table(
+            ['Language', 'formats'],
+            [[lang, ', '.join(f['ext'] for f in reversed(formats))]
+                for lang, formats in subtitles.items()]))
+
+    def urlopen(self, req):
+        """ Start an HTTP download """
+
+        # According to RFC 3986, URLs cannot contain non-ASCII characters; however, this is
+        # not always respected by websites: some tend to give out URLs with non-percent-encoded
+        # non-ASCII characters (see telemb.py, ard.py [#3412]).
+        # urllib chokes on URLs with non-ASCII characters (see http://bugs.python.org/issue3991),
+        # so to work around the issue we replace the request's original URL with a
+        # percent-encoded one.
+        req_is_string = isinstance(req, compat_basestring)
+        url = req if req_is_string else req.get_full_url()
+        url_escaped = escape_url(url)
+
+        # Substitute URL if any change after escaping
+        if url != url_escaped:
+            if req_is_string:
+                req = url_escaped
+            else:
+                req_type = HEADRequest if req.get_method() == 'HEAD' else compat_urllib_request.Request
+                req = req_type(
+                    url_escaped, data=req.data, headers=req.headers,
+                    origin_req_host=req.origin_req_host, unverifiable=req.unverifiable)
+
+        return self._opener.open(req, timeout=self._socket_timeout)
+
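+    # For example, a request for 'http://example.com/dür' (illustrative URL) is
+    # rewritten to 'http://example.com/d%C3%BCr' before being opened.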
+    def print_debug_header(self):
+        if not self.params.get('verbose'):
+            return
+
+        if type('') is not compat_str:
+            # Python 2.6 on SLES11 SP1 (https://github.com/rg3/youtube-dl/issues/3326)
+            self.report_warning(
+                'Your Python is broken! Update to a newer and supported version')
+
+        stdout_encoding = getattr(
+            sys.stdout, 'encoding', 'missing (%s)' % type(sys.stdout).__name__)
+        encoding_str = (
+            '[debug] Encodings: locale %s, fs %s, out %s, pref %s\n' % (
+                locale.getpreferredencoding(),
+                sys.getfilesystemencoding(),
+                stdout_encoding,
+                self.get_encoding()))
+        write_string(encoding_str, encoding=None)
+
+        self._write_string('[debug] youtube-dl version ' + __version__ + '\n')
+        try:
+            sp = subprocess.Popen(
+                ['git', 'rev-parse', '--short', 'HEAD'],
+                stdout=subprocess.PIPE, stderr=subprocess.PIPE,
+                cwd=os.path.dirname(os.path.abspath(__file__)))
+            out, err = sp.communicate()
+            out = out.decode().strip()
+            if re.match('[0-9a-f]+', out):
+                self._write_string('[debug] Git HEAD: ' + out + '\n')
+        except Exception:
+            try:
+                sys.exc_clear()
+            except Exception:
+                pass
+        self._write_string('[debug] Python version %s - %s\n' % (
+            platform.python_version(), platform_name()))
+
+        exe_versions = FFmpegPostProcessor.get_versions(self)
+        exe_versions['rtmpdump'] = rtmpdump_version()
+        exe_str = ', '.join(
+            '%s %s' % (exe, v)
+            for exe, v in sorted(exe_versions.items())
+            if v
+        )
+        if not exe_str:
+            exe_str = 'none'
+        self._write_string('[debug] exe versions: %s\n' % exe_str)
+
+        proxy_map = {}
+        for handler in self._opener.handlers:
+            if hasattr(handler, 'proxies'):
+                proxy_map.update(handler.proxies)
+        self._write_string('[debug] Proxy map: ' + compat_str(proxy_map) + '\n')
+
+        if self.params.get('call_home', False):
+            ipaddr = self.urlopen('https://yt-dl.org/ip').read().decode('utf-8')
+            self._write_string('[debug] Public IP address: %s\n' % ipaddr)
+            latest_version = self.urlopen(
+                'https://yt-dl.org/latest/version').read().decode('utf-8')
+            if version_tuple(latest_version) > version_tuple(__version__):
+                self.report_warning(
+                    'You are using an outdated version (newest version: %s)! '
+                    'See https://yt-dl.org/update if you need help updating.' %
+                    latest_version)
+
+    def _setup_opener(self):
+        timeout_val = self.params.get('socket_timeout')
+        self._socket_timeout = 600 if timeout_val is None else float(timeout_val)
+
+        opts_cookiefile = self.params.get('cookiefile')
+        opts_proxy = self.params.get('proxy')
+
+        if opts_cookiefile is None:
+            self.cookiejar = compat_cookiejar.CookieJar()
+        else:
+            self.cookiejar = compat_cookiejar.MozillaCookieJar(
+                opts_cookiefile)
+            if os.access(opts_cookiefile, os.R_OK):
+                self.cookiejar.load()
+
+        cookie_processor = compat_urllib_request.HTTPCookieProcessor(
+            self.cookiejar)
+        if opts_proxy is not None:
+            if opts_proxy == '':
+                proxies = {}
+            else:
+                proxies = {'http': opts_proxy, 'https': opts_proxy}
+        else:
+            proxies = compat_urllib_request.getproxies()
+            # Set HTTPS proxy to HTTP one if given (https://github.com/rg3/youtube-dl/issues/805)
+            if 'http' in proxies and 'https' not in proxies:
+                proxies['https'] = proxies['http']
+        proxy_handler = PerRequestProxyHandler(proxies)
+
+        debuglevel = 1 if self.params.get('debug_printtraffic') else 0
+        https_handler = make_HTTPS_handler(self.params, debuglevel=debuglevel)
+        ydlh = YoutubeDLHandler(self.params, debuglevel=debuglevel)
+        opener = compat_urllib_request.build_opener(
+            proxy_handler, https_handler, cookie_processor, ydlh)
+
+        # Delete the default user-agent header, which would otherwise apply in
+        # cases where our custom HTTP handler doesn't come into play
+        # (See https://github.com/rg3/youtube-dl/issues/1309 for details)
+        opener.addheaders = []
+        self._opener = opener
+
+    def encode(self, s):
+        if isinstance(s, bytes):
+            return s  # Already encoded
+
+        try:
+            return s.encode(self.get_encoding())
+        except UnicodeEncodeError as err:
+            err.reason = err.reason + '. Check your system encoding configuration or use the --encoding option.'
+            raise
+
+    def get_encoding(self):
+        encoding = self.params.get('encoding')
+        if encoding is None:
+            encoding = preferredencoding()
+        return encoding
+
+    def _write_thumbnails(self, info_dict, filename):
+        if self.params.get('writethumbnail', False):
+            thumbnails = info_dict.get('thumbnails')
+            if thumbnails:
+                thumbnails = [thumbnails[-1]]
+        elif self.params.get('write_all_thumbnails', False):
+            thumbnails = info_dict.get('thumbnails')
+        else:
+            return
+
+        if not thumbnails:
+            # No thumbnails present, so return immediately
+            return
+
+        for t in thumbnails:
+            thumb_ext = determine_ext(t['url'], 'jpg')
+            suffix = '_%s' % t['id'] if len(thumbnails) > 1 else ''
+            thumb_display_id = '%s ' % t['id'] if len(thumbnails) > 1 else ''
+            t['filename'] = thumb_filename = os.path.splitext(filename)[0] + suffix + '.' + thumb_ext
+
+            if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(thumb_filename)):
+                self.to_screen('[%s] %s: Thumbnail %sis already present' %
+                               (info_dict['extractor'], info_dict['id'], thumb_display_id))
+            else:
+                self.to_screen('[%s] %s: Downloading thumbnail %s...' %
+                               (info_dict['extractor'], info_dict['id'], thumb_display_id))
+                try:
+                    uf = self.urlopen(t['url'])
+                    with open(thumb_filename, 'wb') as thumbf:
+                        shutil.copyfileobj(uf, thumbf)
+                    self.to_screen('[%s] %s: Writing thumbnail %sto: %s' %
+                                   (info_dict['extractor'], info_dict['id'], thumb_display_id, thumb_filename))
+                except (compat_urllib_error.URLError, compat_http_client.HTTPException, socket.error) as err:
+                    self.report_warning('Unable to download thumbnail "%s": %s' %
+                                        (t['url'], compat_str(err)))

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/__init__.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/__init__.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/__init__.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,418 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from __future__ import unicode_literals
+
+__license__ = 'Public Domain'
+
+import codecs
+import io
+import os
+import random
+import shlex
+import sys
+
+
+from .options import (
+    parseOpts,
+)
+from .compat import (
+    compat_expanduser,
+    compat_getpass,
+    compat_print,
+    workaround_optparse_bug9161,
+)
+from .utils import (
+    DateRange,
+    decodeOption,
+    DEFAULT_OUTTMPL,
+    DownloadError,
+    match_filter_func,
+    MaxDownloadsReached,
+    preferredencoding,
+    read_batch_urls,
+    SameFileError,
+    setproctitle,
+    std_headers,
+    write_string,
+)
+from .update import update_self
+from .downloader import (
+    FileDownloader,
+)
+from .extractor import gen_extractors, list_extractors
+from .YoutubeDL import YoutubeDL
+
+
+def _real_main(argv=None):
+    # Compatibility fixes for Windows
+    if sys.platform == 'win32':
+        # https://github.com/rg3/youtube-dl/issues/820
+        codecs.register(lambda name: codecs.lookup('utf-8') if name == 'cp65001' else None)
+
+    workaround_optparse_bug9161()
+
+    setproctitle('youtube-dl')
+
+    parser, opts, args = parseOpts(argv)
+
+    # Set user agent
+    if opts.user_agent is not None:
+        std_headers['User-Agent'] = opts.user_agent
+
+    # Set referer
+    if opts.referer is not None:
+        std_headers['Referer'] = opts.referer
+
+    # Custom HTTP headers
+    if opts.headers is not None:
+        for h in opts.headers:
+            if h.find(':', 1) < 0:
+                parser.error('wrong header formatting, it should be key:value, not "%s"' % h)
+            key, value = h.split(':', 1)  # maxsplit=1, so values may themselves contain ':'
+            if opts.verbose:
+                write_string('[debug] Adding header from command line option %s:%s\n' % (key, value))
+            std_headers[key] = value
+
+    # Dump user agent
+    if opts.dump_user_agent:
+        compat_print(std_headers['User-Agent'])
+        sys.exit(0)
+
+    # Batch file verification
+    batch_urls = []
+    if opts.batchfile is not None:
+        try:
+            if opts.batchfile == '-':
+                batchfd = sys.stdin
+            else:
+                batchfd = io.open(opts.batchfile, 'r', encoding='utf-8', errors='ignore')
+            batch_urls = read_batch_urls(batchfd)
+            if opts.verbose:
+                write_string('[debug] Batch file urls: ' + repr(batch_urls) + '\n')
+        except IOError:
+            sys.exit('ERROR: batch file could not be read')
+    all_urls = batch_urls + args
+    all_urls = [url.strip() for url in all_urls]
+    _enc = preferredencoding()
+    all_urls = [url.decode(_enc, 'ignore') if isinstance(url, bytes) else url for url in all_urls]
+
+    if opts.list_extractors:
+        for ie in list_extractors(opts.age_limit):
+            compat_print(ie.IE_NAME + (' (CURRENTLY BROKEN)' if not ie._WORKING else ''))
+            matchedUrls = [url for url in all_urls if ie.suitable(url)]
+            for mu in matchedUrls:
+                compat_print('  ' + mu)
+        sys.exit(0)
+    if opts.list_extractor_descriptions:
+        for ie in list_extractors(opts.age_limit):
+            if not ie._WORKING:
+                continue
+            desc = getattr(ie, 'IE_DESC', ie.IE_NAME)
+            if desc is False:
+                continue
+            if hasattr(ie, 'SEARCH_KEY'):
+                _SEARCHES = ('cute kittens', 'slithering pythons', 'falling cat', 'angry poodle',
+                             'purple fish', 'running tortoise', 'sleeping bunny', 'burping cow')
+                _COUNTS = ('', '5', '10', 'all')
+                desc += ' (Example: "%s%s:%s" )' % (ie.SEARCH_KEY, random.choice(_COUNTS), random.choice(_SEARCHES))
+            compat_print(desc)
+        sys.exit(0)
+
+    # Conflicting, missing and erroneous options
+    if opts.usenetrc and (opts.username is not None or opts.password is not None):
+        parser.error('using .netrc conflicts with giving username/password')
+    if opts.password is not None and opts.username is None:
+        parser.error('account username missing')
+    if opts.outtmpl is not None and (opts.usetitle or opts.autonumber or opts.useid):
+        parser.error('using output template conflicts with using title, video ID or auto number')
+    if opts.usetitle and opts.useid:
+        parser.error('using title conflicts with using video ID')
+    if opts.username is not None and opts.password is None:
+        opts.password = compat_getpass('Type account password and press [Return]: ')
+    if opts.ratelimit is not None:
+        numeric_limit = FileDownloader.parse_bytes(opts.ratelimit)
+        if numeric_limit is None:
+            parser.error('invalid rate limit specified')
+        opts.ratelimit = numeric_limit
+    if opts.min_filesize is not None:
+        numeric_limit = FileDownloader.parse_bytes(opts.min_filesize)
+        if numeric_limit is None:
+            parser.error('invalid min_filesize specified')
+        opts.min_filesize = numeric_limit
+    if opts.max_filesize is not None:
+        numeric_limit = FileDownloader.parse_bytes(opts.max_filesize)
+        if numeric_limit is None:
+            parser.error('invalid max_filesize specified')
+        opts.max_filesize = numeric_limit
+    if opts.retries is not None:
+        if opts.retries in ('inf', 'infinite'):
+            opts_retries = float('inf')
+        else:
+            try:
+                opts_retries = int(opts.retries)
+            except (TypeError, ValueError):
+                parser.error('invalid retry count specified')
+    if opts.buffersize is not None:
+        numeric_buffersize = FileDownloader.parse_bytes(opts.buffersize)
+        if numeric_buffersize is None:
+            parser.error('invalid buffer size specified')
+        opts.buffersize = numeric_buffersize
+    if opts.playliststart <= 0:
+        raise ValueError('Playlist start must be positive')
+    if opts.playlistend not in (-1, None) and opts.playlistend < opts.playliststart:
+        raise ValueError('Playlist end must be greater than playlist start')
+    if opts.extractaudio:
+        if opts.audioformat not in ['best', 'aac', 'mp3', 'm4a', 'opus', 'vorbis', 'wav']:
+            parser.error('invalid audio format specified')
+    if opts.audioquality:
+        opts.audioquality = opts.audioquality.strip('k').strip('K')
+        if not opts.audioquality.isdigit():
+            parser.error('invalid audio quality specified')
+    if opts.recodevideo is not None:
+        if opts.recodevideo not in ['mp4', 'flv', 'webm', 'ogg', 'mkv', 'avi']:
+            parser.error('invalid video recode format specified')
+    if opts.convertsubtitles is not None:
+        if opts.convertsubtitles not in ['srt', 'vtt', 'ass']:
+            parser.error('invalid subtitle format specified')
+
+    if opts.date is not None:
+        date = DateRange.day(opts.date)
+    else:
+        date = DateRange(opts.dateafter, opts.datebefore)
+
+    # Do not download videos when there are audio-only formats
+    if opts.extractaudio and not opts.keepvideo and opts.format is None:
+        opts.format = 'bestaudio/best'
+
+    # --all-sub automatically sets --write-sub if --write-auto-sub is not given
+    # this was the old behaviour if only --all-sub was given.
+    if opts.allsubtitles and not opts.writeautomaticsub:
+        opts.writesubtitles = True
+
+    outtmpl = ((opts.outtmpl is not None and opts.outtmpl) or
+               (opts.format == '-1' and opts.usetitle and '%(title)s-%(id)s-%(format)s.%(ext)s') or
+               (opts.format == '-1' and '%(id)s-%(format)s.%(ext)s') or
+               (opts.usetitle and opts.autonumber and '%(autonumber)s-%(title)s-%(id)s.%(ext)s') or
+               (opts.usetitle and '%(title)s-%(id)s.%(ext)s') or
+               (opts.useid and '%(id)s.%(ext)s') or
+               (opts.autonumber and '%(autonumber)s-%(id)s.%(ext)s') or
+               DEFAULT_OUTTMPL)
+    if not os.path.splitext(outtmpl)[1] and opts.extractaudio:
+        parser.error('Cannot download a video and extract audio into the same'
+                     ' file! Use "{0}.%(ext)s" instead of "{0}" as the output'
+                     ' template'.format(outtmpl))
+
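+    # For example, with --title a video titled 'Foo' with id 'abc123' (illustrative
+    # values) is saved as 'Foo-abc123.mp4', the extension depending on the format.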
+    any_getting = (opts.geturl or opts.gettitle or opts.getid or opts.getthumbnail or
+                   opts.getdescription or opts.getfilename or opts.getformat or
+                   opts.getduration or opts.dumpjson or opts.dump_single_json)
+    any_printing = opts.print_json
+    download_archive_fn = (compat_expanduser(opts.download_archive)
+                           if opts.download_archive is not None else opts.download_archive)
+
+    # PostProcessors
+    postprocessors = []
+    # Add the metadata pp first, the other pps will copy it
+    if opts.metafromtitle:
+        postprocessors.append({
+            'key': 'MetadataFromTitle',
+            'titleformat': opts.metafromtitle
+        })
+    if opts.addmetadata:
+        postprocessors.append({'key': 'FFmpegMetadata'})
+    if opts.extractaudio:
+        postprocessors.append({
+            'key': 'FFmpegExtractAudio',
+            'preferredcodec': opts.audioformat,
+            'preferredquality': opts.audioquality,
+            'nopostoverwrites': opts.nopostoverwrites,
+        })
+    if opts.recodevideo:
+        postprocessors.append({
+            'key': 'FFmpegVideoConvertor',
+            'preferedformat': opts.recodevideo,
+        })
+    if opts.convertsubtitles:
+        postprocessors.append({
+            'key': 'FFmpegSubtitlesConvertor',
+            'format': opts.convertsubtitles,
+        })
+    if opts.embedsubtitles:
+        postprocessors.append({
+            'key': 'FFmpegEmbedSubtitle',
+        })
+    if opts.xattrs:
+        postprocessors.append({'key': 'XAttrMetadata'})
+    if opts.embedthumbnail:
+        already_have_thumbnail = opts.writethumbnail or opts.write_all_thumbnails
+        postprocessors.append({
+            'key': 'EmbedThumbnail',
+            'already_have_thumbnail': already_have_thumbnail
+        })
+        if not already_have_thumbnail:
+            opts.writethumbnail = True
+    # Please keep ExecAfterDownload towards the bottom, as it allows the user to modify the
+    # final file in any way; if the user removes the file before a later postprocessor runs,
+    # that postprocessor may fail.
+    if opts.exec_cmd:
+        postprocessors.append({
+            'key': 'ExecAfterDownload',
+            'exec_cmd': opts.exec_cmd,
+        })
+    if opts.xattr_set_filesize:
+        try:
+            import xattr
+            xattr  # reference the import so flake8 does not flag it as unused
+        except ImportError:
+            parser.error('setting filesize xattr requested but python-xattr is not available')
+    external_downloader_args = None
+    if opts.external_downloader_args:
+        external_downloader_args = shlex.split(opts.external_downloader_args)
+    postprocessor_args = None
+    if opts.postprocessor_args:
+        postprocessor_args = shlex.split(opts.postprocessor_args)
+    match_filter = (
+        None if opts.match_filter is None
+        else match_filter_func(opts.match_filter))
+
+    ydl_opts = {
+        'usenetrc': opts.usenetrc,
+        'username': opts.username,
+        'password': opts.password,
+        'twofactor': opts.twofactor,
+        'videopassword': opts.videopassword,
+        'quiet': (opts.quiet or any_getting or any_printing),
+        'no_warnings': opts.no_warnings,
+        'forceurl': opts.geturl,
+        'forcetitle': opts.gettitle,
+        'forceid': opts.getid,
+        'forcethumbnail': opts.getthumbnail,
+        'forcedescription': opts.getdescription,
+        'forceduration': opts.getduration,
+        'forcefilename': opts.getfilename,
+        'forceformat': opts.getformat,
+        'forcejson': opts.dumpjson or opts.print_json,
+        'dump_single_json': opts.dump_single_json,
+        'simulate': opts.simulate or any_getting,
+        'skip_download': opts.skip_download,
+        'format': opts.format,
+        'listformats': opts.listformats,
+        'outtmpl': outtmpl,
+        'autonumber_size': opts.autonumber_size,
+        'restrictfilenames': opts.restrictfilenames,
+        'ignoreerrors': opts.ignoreerrors,
+        'force_generic_extractor': opts.force_generic_extractor,
+        'ratelimit': opts.ratelimit,
+        'nooverwrites': opts.nooverwrites,
+        'retries': opts_retries,
+        'buffersize': opts.buffersize,
+        'noresizebuffer': opts.noresizebuffer,
+        'continuedl': opts.continue_dl,
+        'noprogress': opts.noprogress,
+        'progress_with_newline': opts.progress_with_newline,
+        'playliststart': opts.playliststart,
+        'playlistend': opts.playlistend,
+        'playlistreverse': opts.playlist_reverse,
+        'noplaylist': opts.noplaylist,
+        'logtostderr': opts.outtmpl == '-',
+        'consoletitle': opts.consoletitle,
+        'nopart': opts.nopart,
+        'updatetime': opts.updatetime,
+        'writedescription': opts.writedescription,
+        'writeannotations': opts.writeannotations,
+        'writeinfojson': opts.writeinfojson,
+        'writethumbnail': opts.writethumbnail,
+        'write_all_thumbnails': opts.write_all_thumbnails,
+        'writesubtitles': opts.writesubtitles,
+        'writeautomaticsub': opts.writeautomaticsub,
+        'allsubtitles': opts.allsubtitles,
+        'listsubtitles': opts.listsubtitles,
+        'subtitlesformat': opts.subtitlesformat,
+        'subtitleslangs': opts.subtitleslangs,
+        'matchtitle': decodeOption(opts.matchtitle),
+        'rejecttitle': decodeOption(opts.rejecttitle),
+        'max_downloads': opts.max_downloads,
+        'prefer_free_formats': opts.prefer_free_formats,
+        'verbose': opts.verbose,
+        'dump_intermediate_pages': opts.dump_intermediate_pages,
+        'write_pages': opts.write_pages,
+        'test': opts.test,
+        'keepvideo': opts.keepvideo,
+        'min_filesize': opts.min_filesize,
+        'max_filesize': opts.max_filesize,
+        'min_views': opts.min_views,
+        'max_views': opts.max_views,
+        'daterange': date,
+        'cachedir': opts.cachedir,
+        'youtube_print_sig_code': opts.youtube_print_sig_code,
+        'age_limit': opts.age_limit,
+        'download_archive': download_archive_fn,
+        'cookiefile': opts.cookiefile,
+        'nocheckcertificate': opts.no_check_certificate,
+        'prefer_insecure': opts.prefer_insecure,
+        'proxy': opts.proxy,
+        'socket_timeout': opts.socket_timeout,
+        'bidi_workaround': opts.bidi_workaround,
+        'debug_printtraffic': opts.debug_printtraffic,
+        'prefer_ffmpeg': opts.prefer_ffmpeg,
+        'include_ads': opts.include_ads,
+        'default_search': opts.default_search,
+        'youtube_include_dash_manifest': opts.youtube_include_dash_manifest,
+        'encoding': opts.encoding,
+        'extract_flat': opts.extract_flat,
+        'merge_output_format': opts.merge_output_format,
+        'postprocessors': postprocessors,
+        'fixup': opts.fixup,
+        'source_address': opts.source_address,
+        'call_home': opts.call_home,
+        'sleep_interval': opts.sleep_interval,
+        'external_downloader': opts.external_downloader,
+        'list_thumbnails': opts.list_thumbnails,
+        'playlist_items': opts.playlist_items,
+        'xattr_set_filesize': opts.xattr_set_filesize,
+        'match_filter': match_filter,
+        'no_color': opts.no_color,
+        'ffmpeg_location': opts.ffmpeg_location,
+        'hls_prefer_native': opts.hls_prefer_native,
+        'external_downloader_args': external_downloader_args,
+        'postprocessor_args': postprocessor_args,
+        'cn_verification_proxy': opts.cn_verification_proxy,
+    }
+
+    with YoutubeDL(ydl_opts) as ydl:
+        # Update version
+        if opts.update_self:
+            update_self(ydl.to_screen, opts.verbose)
+
+        # Remove cache dir
+        if opts.rm_cachedir:
+            ydl.cache.remove()
+
+        # With no URLs given, exit cleanly after --update/--rm-cache-dir, otherwise error out
+        if (len(all_urls) < 1) and (opts.load_info_filename is None):
+            if opts.update_self or opts.rm_cachedir:
+                sys.exit()
+
+            ydl.warn_if_short_id(sys.argv[1:] if argv is None else argv)
+            parser.error(
+                'You must provide at least one URL.\n'
+                'Type youtube-dl --help to see a list of all options.')
+
+        try:
+            if opts.load_info_filename is not None:
+                retcode = ydl.download_with_info_file(opts.load_info_filename)
+            else:
+                retcode = ydl.download(all_urls)
+        except MaxDownloadsReached:
+            ydl.to_screen('--max-download limit reached, aborting.')
+            retcode = 101
+
+    sys.exit(retcode)
+
+
+def main(argv=None):
+    try:
+        _real_main(argv)
+    except DownloadError:
+        sys.exit(1)
+    except SameFileError:
+        sys.exit('ERROR: fixed output name but more than one file to download')
+    except KeyboardInterrupt:
+        sys.exit('\nERROR: Interrupted by user')
+
+__all__ = ['main', 'YoutubeDL', 'gen_extractors', 'list_extractors']

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/__main__.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/__main__.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/__main__.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,19 @@
+#!/usr/bin/env python
+from __future__ import unicode_literals
+
+# Execute with
+# $ python youtube_dl/__main__.py (2.6+)
+# $ python -m youtube_dl          (2.7+)
+
+import sys
+
+if __package__ is None and not hasattr(sys, "frozen"):
+    # direct call of __main__.py
+    import os.path
+    path = os.path.realpath(os.path.abspath(__file__))
+    sys.path.append(os.path.dirname(os.path.dirname(path)))
+
+import youtube_dl
+
+if __name__ == '__main__':
+    youtube_dl.main()

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/aes.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/aes.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/aes.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,331 @@
+from __future__ import unicode_literals
+
+import base64
+from math import ceil
+
+from .utils import bytes_to_intlist, intlist_to_bytes
+
+BLOCK_SIZE_BYTES = 16
+
+
+def aes_ctr_decrypt(data, key, counter):
+    """
+    Decrypt with aes in counter mode
+
+    @param {int[]} data        cipher
+    @param {int[]} key         16/24/32-Byte cipher key
+    @param {instance} counter  Instance whose next_value function (@returns {int[]}  16-Byte block)
+                               returns the next counter block
+    @returns {int[]}           decrypted data
+    """
+    expanded_key = key_expansion(key)
+    block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
+
+    decrypted_data = []
+    for i in range(block_count):
+        counter_block = counter.next_value()
+        block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
+        block += [0] * (BLOCK_SIZE_BYTES - len(block))
+
+        cipher_counter_block = aes_encrypt(counter_block, expanded_key)
+        decrypted_data += xor(block, cipher_counter_block)
+    decrypted_data = decrypted_data[:len(data)]
+
+    return decrypted_data
+
+
+def aes_cbc_decrypt(data, key, iv):
+    """
+    Decrypt with aes in CBC mode
+
+    @param {int[]} data        cipher
+    @param {int[]} key         16/24/32-Byte cipher key
+    @param {int[]} iv          16-Byte IV
+    @returns {int[]}           decrypted data
+    """
+    expanded_key = key_expansion(key)
+    block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
+
+    decrypted_data = []
+    previous_cipher_block = iv
+    for i in range(block_count):
+        block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
+        block += [0] * (BLOCK_SIZE_BYTES - len(block))
+
+        decrypted_block = aes_decrypt(block, expanded_key)
+        decrypted_data += xor(decrypted_block, previous_cipher_block)
+        previous_cipher_block = block
+    decrypted_data = decrypted_data[:len(data)]
+
+    return decrypted_data
+
+
+def key_expansion(data):
+    """
+    Generate key schedule
+
+    @param {int[]} data  16/24/32-Byte cipher key
+    @returns {int[]}     176/208/240-Byte expanded key
+    """
+    data = data[:]  # copy
+    rcon_iteration = 1
+    key_size_bytes = len(data)
+    expanded_key_size_bytes = (key_size_bytes // 4 + 7) * BLOCK_SIZE_BYTES
+
+    while len(data) < expanded_key_size_bytes:
+        temp = data[-4:]
+        temp = key_schedule_core(temp, rcon_iteration)
+        rcon_iteration += 1
+        data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
+
+        for _ in range(3):
+            temp = data[-4:]
+            data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
+
+        if key_size_bytes == 32:
+            temp = data[-4:]
+            temp = sub_bytes(temp)
+            data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
+
+        for _ in range(3 if key_size_bytes == 32 else 2 if key_size_bytes == 24 else 0):
+            temp = data[-4:]
+            data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
+    data = data[:expanded_key_size_bytes]
+
+    return data
+
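+# For example, a 16-byte (128-bit) key expands to 176 bytes: eleven 16-byte
+# round keys, one for the initial AddRoundKey plus one per round.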
+
+def aes_encrypt(data, expanded_key):
+    """
+    Encrypt one block with aes
+
+    @param {int[]} data          16-Byte state
+    @param {int[]} expanded_key  176/208/240-Byte expanded key
+    @returns {int[]}             16-Byte cipher
+    """
+    rounds = len(expanded_key) // BLOCK_SIZE_BYTES - 1
+
+    data = xor(data, expanded_key[:BLOCK_SIZE_BYTES])
+    for i in range(1, rounds + 1):
+        data = sub_bytes(data)
+        data = shift_rows(data)
+        if i != rounds:
+            data = mix_columns(data)
+        data = xor(data, expanded_key[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES])
+
+    return data
+
+
+def aes_decrypt(data, expanded_key):
+    """
+    Decrypt one block with aes
+
+    @param {int[]} data          16-Byte cipher
+    @param {int[]} expanded_key  176/208/240-Byte expanded key
+    @returns {int[]}             16-Byte state
+    """
+    rounds = len(expanded_key) // BLOCK_SIZE_BYTES - 1
+
+    for i in range(rounds, 0, -1):
+        data = xor(data, expanded_key[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES])
+        if i != rounds:
+            data = mix_columns_inv(data)
+        data = shift_rows_inv(data)
+        data = sub_bytes_inv(data)
+    data = xor(data, expanded_key[:BLOCK_SIZE_BYTES])
+
+    return data
+
+
+def aes_decrypt_text(data, password, key_size_bytes):
+    """
+    Decrypt text
+    - The first 8 bytes of the decoded 'data' are the 8 high bytes of the counter
+    - The cipher key is derived by encrypting the first 16 bytes of 'password'
+      with the first 'key_size_bytes' bytes of 'password' (zero-padded if necessary)
+    - Mode of operation is 'counter'
+
+    @param {str} data                    Base64 encoded string
+    @param {str,unicode} password        Password (will be encoded with utf-8)
+    @param {int} key_size_bytes          Possible values: 16 for 128-Bit, 24 for 192-Bit or 32 for 256-Bit
+    @returns {str}                       Decrypted data
+    """
+    NONCE_LENGTH_BYTES = 8
+
+    data = bytes_to_intlist(base64.b64decode(data.encode('utf-8')))
+    password = bytes_to_intlist(password.encode('utf-8'))
+
+    key = password[:key_size_bytes] + [0] * (key_size_bytes - len(password))
+    key = aes_encrypt(key[:BLOCK_SIZE_BYTES], key_expansion(key)) * (key_size_bytes // BLOCK_SIZE_BYTES)
+
+    nonce = data[:NONCE_LENGTH_BYTES]
+    cipher = data[NONCE_LENGTH_BYTES:]
+
+    class Counter:
+        __value = nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES)
+
+        def next_value(self):
+            temp = self.__value
+            self.__value = inc(self.__value)
+            return temp
+
+    decrypted_data = aes_ctr_decrypt(cipher, key, Counter())
+    plaintext = intlist_to_bytes(decrypted_data)
+
+    return plaintext
+
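+# A rough usage sketch (hypothetical values): decrypting a base64-encoded token
+# with a 128-bit key derived from the password
+#
+#     plaintext = aes_decrypt_text(encrypted_b64, 'secret-password', 16)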
+RCON = (0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36)
+SBOX = (0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5, 0x30, 0x01, 0x67, 0x2B, 0xFE, 0xD7, 0xAB, 0x76,
+        0xCA, 0x82, 0xC9, 0x7D, 0xFA, 0x59, 0x47, 0xF0, 0xAD, 0xD4, 0xA2, 0xAF, 0x9C, 0xA4, 0x72, 0xC0,
+        0xB7, 0xFD, 0x93, 0x26, 0x36, 0x3F, 0xF7, 0xCC, 0x34, 0xA5, 0xE5, 0xF1, 0x71, 0xD8, 0x31, 0x15,
+        0x04, 0xC7, 0x23, 0xC3, 0x18, 0x96, 0x05, 0x9A, 0x07, 0x12, 0x80, 0xE2, 0xEB, 0x27, 0xB2, 0x75,
+        0x09, 0x83, 0x2C, 0x1A, 0x1B, 0x6E, 0x5A, 0xA0, 0x52, 0x3B, 0xD6, 0xB3, 0x29, 0xE3, 0x2F, 0x84,
+        0x53, 0xD1, 0x00, 0xED, 0x20, 0xFC, 0xB1, 0x5B, 0x6A, 0xCB, 0xBE, 0x39, 0x4A, 0x4C, 0x58, 0xCF,
+        0xD0, 0xEF, 0xAA, 0xFB, 0x43, 0x4D, 0x33, 0x85, 0x45, 0xF9, 0x02, 0x7F, 0x50, 0x3C, 0x9F, 0xA8,
+        0x51, 0xA3, 0x40, 0x8F, 0x92, 0x9D, 0x38, 0xF5, 0xBC, 0xB6, 0xDA, 0x21, 0x10, 0xFF, 0xF3, 0xD2,
+        0xCD, 0x0C, 0x13, 0xEC, 0x5F, 0x97, 0x44, 0x17, 0xC4, 0xA7, 0x7E, 0x3D, 0x64, 0x5D, 0x19, 0x73,
+        0x60, 0x81, 0x4F, 0xDC, 0x22, 0x2A, 0x90, 0x88, 0x46, 0xEE, 0xB8, 0x14, 0xDE, 0x5E, 0x0B, 0xDB,
+        0xE0, 0x32, 0x3A, 0x0A, 0x49, 0x06, 0x24, 0x5C, 0xC2, 0xD3, 0xAC, 0x62, 0x91, 0x95, 0xE4, 0x79,
+        0xE7, 0xC8, 0x37, 0x6D, 0x8D, 0xD5, 0x4E, 0xA9, 0x6C, 0x56, 0xF4, 0xEA, 0x65, 0x7A, 0xAE, 0x08,
+        0xBA, 0x78, 0x25, 0x2E, 0x1C, 0xA6, 0xB4, 0xC6, 0xE8, 0xDD, 0x74, 0x1F, 0x4B, 0xBD, 0x8B, 0x8A,
+        0x70, 0x3E, 0xB5, 0x66, 0x48, 0x03, 0xF6, 0x0E, 0x61, 0x35, 0x57, 0xB9, 0x86, 0xC1, 0x1D, 0x9E,
+        0xE1, 0xF8, 0x98, 0x11, 0x69, 0xD9, 0x8E, 0x94, 0x9B, 0x1E, 0x87, 0xE9, 0xCE, 0x55, 0x28, 0xDF,
+        0x8C, 0xA1, 0x89, 0x0D, 0xBF, 0xE6, 0x42, 0x68, 0x41, 0x99, 0x2D, 0x0F, 0xB0, 0x54, 0xBB, 0x16)
+SBOX_INV = (0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb,
+            0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87, 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb,
+            0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e,
+            0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25,
+            0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92,
+            0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda, 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84,
+            0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06,
+            0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b,
+            0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73,
+            0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e,
+            0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b,
+            0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4,
+            0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f,
+            0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef,
+            0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61,
+            0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d)
+MIX_COLUMN_MATRIX = ((0x2, 0x3, 0x1, 0x1),
+                     (0x1, 0x2, 0x3, 0x1),
+                     (0x1, 0x1, 0x2, 0x3),
+                     (0x3, 0x1, 0x1, 0x2))
+MIX_COLUMN_MATRIX_INV = ((0xE, 0xB, 0xD, 0x9),
+                         (0x9, 0xE, 0xB, 0xD),
+                         (0xD, 0x9, 0xE, 0xB),
+                         (0xB, 0xD, 0x9, 0xE))
+RIJNDAEL_EXP_TABLE = (0x01, 0x03, 0x05, 0x0F, 0x11, 0x33, 0x55, 0xFF, 0x1A, 0x2E, 0x72, 0x96, 0xA1, 0xF8, 0x13, 0x35,
+                      0x5F, 0xE1, 0x38, 0x48, 0xD8, 0x73, 0x95, 0xA4, 0xF7, 0x02, 0x06, 0x0A, 0x1E, 0x22, 0x66, 0xAA,
+                      0xE5, 0x34, 0x5C, 0xE4, 0x37, 0x59, 0xEB, 0x26, 0x6A, 0xBE, 0xD9, 0x70, 0x90, 0xAB, 0xE6, 0x31,
+                      0x53, 0xF5, 0x04, 0x0C, 0x14, 0x3C, 0x44, 0xCC, 0x4F, 0xD1, 0x68, 0xB8, 0xD3, 0x6E, 0xB2, 0xCD,
+                      0x4C, 0xD4, 0x67, 0xA9, 0xE0, 0x3B, 0x4D, 0xD7, 0x62, 0xA6, 0xF1, 0x08, 0x18, 0x28, 0x78, 0x88,
+                      0x83, 0x9E, 0xB9, 0xD0, 0x6B, 0xBD, 0xDC, 0x7F, 0x81, 0x98, 0xB3, 0xCE, 0x49, 0xDB, 0x76, 0x9A,
+                      0xB5, 0xC4, 0x57, 0xF9, 0x10, 0x30, 0x50, 0xF0, 0x0B, 0x1D, 0x27, 0x69, 0xBB, 0xD6, 0x61, 0xA3,
+                      0xFE, 0x19, 0x2B, 0x7D, 0x87, 0x92, 0xAD, 0xEC, 0x2F, 0x71, 0x93, 0xAE, 0xE9, 0x20, 0x60, 0xA0,
+                      0xFB, 0x16, 0x3A, 0x4E, 0xD2, 0x6D, 0xB7, 0xC2, 0x5D, 0xE7, 0x32, 0x56, 0xFA, 0x15, 0x3F, 0x41,
+                      0xC3, 0x5E, 0xE2, 0x3D, 0x47, 0xC9, 0x40, 0xC0, 0x5B, 0xED, 0x2C, 0x74, 0x9C, 0xBF, 0xDA, 0x75,
+                      0x9F, 0xBA, 0xD5, 0x64, 0xAC, 0xEF, 0x2A, 0x7E, 0x82, 0x9D, 0xBC, 0xDF, 0x7A, 0x8E, 0x89, 0x80,
+                      0x9B, 0xB6, 0xC1, 0x58, 0xE8, 0x23, 0x65, 0xAF, 0xEA, 0x25, 0x6F, 0xB1, 0xC8, 0x43, 0xC5, 0x54,
+                      0xFC, 0x1F, 0x21, 0x63, 0xA5, 0xF4, 0x07, 0x09, 0x1B, 0x2D, 0x77, 0x99, 0xB0, 0xCB, 0x46, 0xCA,
+                      0x45, 0xCF, 0x4A, 0xDE, 0x79, 0x8B, 0x86, 0x91, 0xA8, 0xE3, 0x3E, 0x42, 0xC6, 0x51, 0xF3, 0x0E,
+                      0x12, 0x36, 0x5A, 0xEE, 0x29, 0x7B, 0x8D, 0x8C, 0x8F, 0x8A, 0x85, 0x94, 0xA7, 0xF2, 0x0D, 0x17,
+                      0x39, 0x4B, 0xDD, 0x7C, 0x84, 0x97, 0xA2, 0xFD, 0x1C, 0x24, 0x6C, 0xB4, 0xC7, 0x52, 0xF6, 0x01)
+RIJNDAEL_LOG_TABLE = (0x00, 0x00, 0x19, 0x01, 0x32, 0x02, 0x1a, 0xc6, 0x4b, 0xc7, 0x1b, 0x68, 0x33, 0xee, 0xdf, 0x03,
+                      0x64, 0x04, 0xe0, 0x0e, 0x34, 0x8d, 0x81, 0xef, 0x4c, 0x71, 0x08, 0xc8, 0xf8, 0x69, 0x1c, 0xc1,
+                      0x7d, 0xc2, 0x1d, 0xb5, 0xf9, 0xb9, 0x27, 0x6a, 0x4d, 0xe4, 0xa6, 0x72, 0x9a, 0xc9, 0x09, 0x78,
+                      0x65, 0x2f, 0x8a, 0x05, 0x21, 0x0f, 0xe1, 0x24, 0x12, 0xf0, 0x82, 0x45, 0x35, 0x93, 0xda, 0x8e,
+                      0x96, 0x8f, 0xdb, 0xbd, 0x36, 0xd0, 0xce, 0x94, 0x13, 0x5c, 0xd2, 0xf1, 0x40, 0x46, 0x83, 0x38,
+                      0x66, 0xdd, 0xfd, 0x30, 0xbf, 0x06, 0x8b, 0x62, 0xb3, 0x25, 0xe2, 0x98, 0x22, 0x88, 0x91, 0x10,
+                      0x7e, 0x6e, 0x48, 0xc3, 0xa3, 0xb6, 0x1e, 0x42, 0x3a, 0x6b, 0x28, 0x54, 0xfa, 0x85, 0x3d, 0xba,
+                      0x2b, 0x79, 0x0a, 0x15, 0x9b, 0x9f, 0x5e, 0xca, 0x4e, 0xd4, 0xac, 0xe5, 0xf3, 0x73, 0xa7, 0x57,
+                      0xaf, 0x58, 0xa8, 0x50, 0xf4, 0xea, 0xd6, 0x74, 0x4f, 0xae, 0xe9, 0xd5, 0xe7, 0xe6, 0xad, 0xe8,
+                      0x2c, 0xd7, 0x75, 0x7a, 0xeb, 0x16, 0x0b, 0xf5, 0x59, 0xcb, 0x5f, 0xb0, 0x9c, 0xa9, 0x51, 0xa0,
+                      0x7f, 0x0c, 0xf6, 0x6f, 0x17, 0xc4, 0x49, 0xec, 0xd8, 0x43, 0x1f, 0x2d, 0xa4, 0x76, 0x7b, 0xb7,
+                      0xcc, 0xbb, 0x3e, 0x5a, 0xfb, 0x60, 0xb1, 0x86, 0x3b, 0x52, 0xa1, 0x6c, 0xaa, 0x55, 0x29, 0x9d,
+                      0x97, 0xb2, 0x87, 0x90, 0x61, 0xbe, 0xdc, 0xfc, 0xbc, 0x95, 0xcf, 0xcd, 0x37, 0x3f, 0x5b, 0xd1,
+                      0x53, 0x39, 0x84, 0x3c, 0x41, 0xa2, 0x6d, 0x47, 0x14, 0x2a, 0x9e, 0x5d, 0x56, 0xf2, 0xd3, 0xab,
+                      0x44, 0x11, 0x92, 0xd9, 0x23, 0x20, 0x2e, 0x89, 0xb4, 0x7c, 0xb8, 0x26, 0x77, 0x99, 0xe3, 0xa5,
+                      0x67, 0x4a, 0xed, 0xde, 0xc5, 0x31, 0xfe, 0x18, 0x0d, 0x63, 0x8c, 0x80, 0xc0, 0xf7, 0x70, 0x07)
+
+
+def sub_bytes(data):
+    return [SBOX[x] for x in data]
+
+
+def sub_bytes_inv(data):
+    return [SBOX_INV[x] for x in data]
+
+
+def rotate(data):
+    return data[1:] + [data[0]]
+
+
+def key_schedule_core(data, rcon_iteration):
+    data = rotate(data)
+    data = sub_bytes(data)
+    data[0] = data[0] ^ RCON[rcon_iteration]
+
+    return data
+
+
+def xor(data1, data2):
+    return [x ^ y for x, y in zip(data1, data2)]
+
+
+def rijndael_mul(a, b):
+    # Multiplication in GF(2^8) via the log/exp tables: a*b = exp(log(a) + log(b)).
+    # Sanity check: rijndael_mul(0x02, 0x03) == 0x06, since x * (x + 1) = x^2 + x.
+    if a == 0 or b == 0:
+        return 0
+    return RIJNDAEL_EXP_TABLE[(RIJNDAEL_LOG_TABLE[a] + RIJNDAEL_LOG_TABLE[b]) % 0xFF]
+
+
+def mix_column(data, matrix):
+    data_mixed = []
+    for row in range(4):
+        mixed = 0
+        for column in range(4):
+            # xor is (+) and (-)
+            mixed ^= rijndael_mul(data[column], matrix[row][column])
+        data_mixed.append(mixed)
+    return data_mixed
+
+
+def mix_columns(data, matrix=MIX_COLUMN_MATRIX):
+    data_mixed = []
+    for i in range(4):
+        column = data[i * 4: (i + 1) * 4]
+        data_mixed += mix_column(column, matrix)
+    return data_mixed
+
+
+def mix_columns_inv(data):
+    return mix_columns(data, MIX_COLUMN_MATRIX_INV)
+
+
+def shift_rows(data):
+    data_shifted = []
+    for column in range(4):
+        for row in range(4):
+            data_shifted.append(data[((column + row) & 0b11) * 4 + row])
+    return data_shifted
+
+
+def shift_rows_inv(data):
+    data_shifted = []
+    for column in range(4):
+        for row in range(4):
+            data_shifted.append(data[((column - row) & 0b11) * 4 + row])
+    return data_shifted
+
+
+def inc(data):
+    data = data[:]  # copy
+    for i in range(len(data) - 1, -1, -1):
+        if data[i] == 255:
+            data[i] = 0
+        else:
+            data[i] = data[i] + 1
+            break
+    return data
+
+__all__ = ['aes_encrypt', 'key_expansion', 'aes_ctr_decrypt', 'aes_cbc_decrypt', 'aes_decrypt_text']

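For reviewers skimming the bundled aes.py: the helpers above are the standard
AES building blocks, and one encryption round simply composes them (the final
round omits mix_columns). A minimal sketch, assuming only the names defined in
this file:

    def aes_round(state, round_key):
        # One AES round over a 16-byte state: SubBytes, ShiftRows,
        # MixColumns, then AddRoundKey (xor with the round key).
        state = sub_bytes(state)
        state = shift_rows(state)
        state = mix_columns(state)
        return xor(state, round_key)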
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/cache.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/cache.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/cache.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,93 @@
+from __future__ import unicode_literals
+
+import errno
+import io
+import json
+import os
+import re
+import shutil
+import traceback
+
+from .compat import compat_expanduser, compat_getenv
+from .utils import write_json_file
+
+
+class Cache(object):
+    def __init__(self, ydl):
+        self._ydl = ydl
+
+    def _get_root_dir(self):
+        res = self._ydl.params.get('cachedir')
+        if res is None:
+            cache_root = compat_getenv('XDG_CACHE_HOME', '~/.cache')
+            res = os.path.join(cache_root, 'youtube-dl')
+        return compat_expanduser(res)
+
+    def _get_cache_fn(self, section, key, dtype):
+        assert re.match(r'^[a-zA-Z0-9_.-]+$', section), \
+            'invalid section %r' % section
+        assert re.match(r'^[a-zA-Z0-9_.-]+$', key), 'invalid key %r' % key
+        return os.path.join(
+            self._get_root_dir(), section, '%s.%s' % (key, dtype))
+
+    @property
+    def enabled(self):
+        return self._ydl.params.get('cachedir') is not False
+
+    def store(self, section, key, data, dtype='json'):
+        assert dtype in ('json',)
+
+        if not self.enabled:
+            return
+
+        fn = self._get_cache_fn(section, key, dtype)
+        try:
+            try:
+                os.makedirs(os.path.dirname(fn))
+            except OSError as ose:
+                if ose.errno != errno.EEXIST:
+                    raise
+            write_json_file(data, fn)
+        except Exception:
+            tb = traceback.format_exc()
+            self._ydl.report_warning(
+                'Writing cache to %r failed: %s' % (fn, tb))
+
+    def load(self, section, key, dtype='json', default=None):
+        assert dtype in ('json',)
+
+        if not self.enabled:
+            return default
+
+        cache_fn = self._get_cache_fn(section, key, dtype)
+        try:
+            try:
+                with io.open(cache_fn, 'r', encoding='utf-8') as cachef:
+                    return json.load(cachef)
+            except ValueError:
+                try:
+                    file_size = os.path.getsize(cache_fn)
+                except (OSError, IOError) as oe:
+                    file_size = str(oe)
+                self._ydl.report_warning(
+                    'Cache retrieval from %s failed (%s)' % (cache_fn, file_size))
+        except IOError:
+            pass  # No cache available
+
+        return default
+
+    def remove(self):
+        if not self.enabled:
+            self._ydl.to_screen('Cache is disabled (Did you combine --no-cache-dir and --rm-cache-dir?)')
+            return
+
+        cachedir = self._get_root_dir()
+        if not any((term in cachedir) for term in ('cache', 'tmp')):
+            raise Exception('Not removing directory %s - this does not look like a cache dir' % cachedir)
+
+        self._ydl.to_screen(
+            'Removing cache dir %s .' % cachedir, skip_eol=True)
+        if os.path.exists(cachedir):
+            self._ydl.to_screen('.', skip_eol=True)
+            shutil.rmtree(cachedir)
+        self._ydl.to_screen('.')

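The Cache class above only needs an object exposing params, report_warning()
and to_screen(); everything else is keyed off params['cachedir'] (defaulting
to $XDG_CACHE_HOME/youtube-dl), and store()/load() round-trip JSON. A sketch
with a hypothetical stand-in for the YoutubeDL object, for illustration only:

    class FakeYDL(object):  # hypothetical harness, not part of youtube-dl
        params = {'cachedir': '/tmp/ydl-cache'}

        def report_warning(self, message):
            print('WARNING: %s' % message)

        def to_screen(self, message, skip_eol=False):
            print(message)

    cache = Cache(FakeYDL())
    cache.store('youtube', 'player-abc', {'sig_order': [1, 2, 3]})
    assert cache.load('youtube', 'player-abc') == {'sig_order': [1, 2, 3]}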
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/compat.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/compat.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/compat.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,466 @@
+from __future__ import unicode_literals
+
+import collections
+import getpass
+import optparse
+import os
+import re
+import shutil
+import socket
+import subprocess
+import sys
+import itertools
+
+
+try:
+    import urllib.request as compat_urllib_request
+except ImportError:  # Python 2
+    import urllib2 as compat_urllib_request
+
+try:
+    import urllib.error as compat_urllib_error
+except ImportError:  # Python 2
+    import urllib2 as compat_urllib_error
+
+try:
+    import urllib.parse as compat_urllib_parse
+except ImportError:  # Python 2
+    import urllib as compat_urllib_parse
+
+try:
+    from urllib.parse import urlparse as compat_urllib_parse_urlparse
+except ImportError:  # Python 2
+    from urlparse import urlparse as compat_urllib_parse_urlparse
+
+try:
+    import urllib.parse as compat_urlparse
+except ImportError:  # Python 2
+    import urlparse as compat_urlparse
+
+try:
+    import http.cookiejar as compat_cookiejar
+except ImportError:  # Python 2
+    import cookielib as compat_cookiejar
+
+try:
+    import html.entities as compat_html_entities
+except ImportError:  # Python 2
+    import htmlentitydefs as compat_html_entities
+
+try:
+    import http.client as compat_http_client
+except ImportError:  # Python 2
+    import httplib as compat_http_client
+
+try:
+    from urllib.error import HTTPError as compat_HTTPError
+except ImportError:  # Python 2
+    from urllib2 import HTTPError as compat_HTTPError
+
+try:
+    from urllib.request import urlretrieve as compat_urlretrieve
+except ImportError:  # Python 2
+    from urllib import urlretrieve as compat_urlretrieve
+
+
+try:
+    from subprocess import DEVNULL
+    compat_subprocess_get_DEVNULL = lambda: DEVNULL
+except ImportError:
+    compat_subprocess_get_DEVNULL = lambda: open(os.path.devnull, 'w')
+
+try:
+    import http.server as compat_http_server
+except ImportError:
+    import BaseHTTPServer as compat_http_server
+
+try:
+    from urllib.parse import unquote_to_bytes as compat_urllib_parse_unquote_to_bytes
+    from urllib.parse import unquote as compat_urllib_parse_unquote
+    from urllib.parse import unquote_plus as compat_urllib_parse_unquote_plus
+except ImportError:  # Python 2
+    _asciire = re.compile('([\x00-\x7f]+)') if sys.version_info < (2, 7) else compat_urllib_parse._asciire
+
+    # HACK: The following are the correct unquote_to_bytes, unquote and unquote_plus
+    # implementations from cpython 3.4.3's stdlib. Python 2's version
+    # is apparently broken (see https://github.com/rg3/youtube-dl/pull/6244)
+
+    def compat_urllib_parse_unquote_to_bytes(string):
+        """unquote_to_bytes('abc%20def') -> b'abc def'."""
+        # Note: strings are encoded as UTF-8. This is only an issue if it contains
+        # unescaped non-ASCII characters, which URIs should not.
+        if not string:
+            # Is it a string-like object?
+            string.split
+            return b''
+        if isinstance(string, unicode):
+            string = string.encode('utf-8')
+        bits = string.split(b'%')
+        if len(bits) == 1:
+            return string
+        res = [bits[0]]
+        append = res.append
+        for item in bits[1:]:
+            try:
+                append(compat_urllib_parse._hextochr[item[:2]])
+                append(item[2:])
+            except KeyError:
+                append(b'%')
+                append(item)
+        return b''.join(res)
+
+    def compat_urllib_parse_unquote(string, encoding='utf-8', errors='replace'):
+        """Replace %xx escapes by their single-character equivalent. The optional
+        encoding and errors parameters specify how to decode percent-encoded
+        sequences into Unicode characters, as accepted by the bytes.decode()
+        method.
+        By default, percent-encoded sequences are decoded with UTF-8, and invalid
+        sequences are replaced by a placeholder character.
+
+        unquote('abc%20def') -> 'abc def'.
+        """
+        if '%' not in string:
+            # Raises AttributeError for non-string input (same type-check
+            # idiom as in compat_urllib_parse_unquote_to_bytes above)
+            string.split
+            return string
+        if encoding is None:
+            encoding = 'utf-8'
+        if errors is None:
+            errors = 'replace'
+        bits = _asciire.split(string)
+        res = [bits[0]]
+        append = res.append
+        for i in range(1, len(bits), 2):
+            append(compat_urllib_parse_unquote_to_bytes(bits[i]).decode(encoding, errors))
+            append(bits[i + 1])
+        return ''.join(res)
+
+    def compat_urllib_parse_unquote_plus(string, encoding='utf-8', errors='replace'):
+        """Like unquote(), but also replace plus signs by spaces, as required for
+        unquoting HTML form values.
+
+        unquote_plus('%7e/abc+def') -> '~/abc def'
+        """
+        string = string.replace('+', ' ')
+        return compat_urllib_parse_unquote(string, encoding, errors)
+
+try:
+    compat_str = unicode  # Python 2
+except NameError:
+    compat_str = str
+
+try:
+    compat_basestring = basestring  # Python 2
+except NameError:
+    compat_basestring = str
+
+try:
+    compat_chr = unichr  # Python 2
+except NameError:
+    compat_chr = chr
+
+try:
+    from xml.etree.ElementTree import ParseError as compat_xml_parse_error
+except ImportError:  # Python 2.6
+    from xml.parsers.expat import ExpatError as compat_xml_parse_error
+
+
+try:
+    from urllib.parse import parse_qs as compat_parse_qs
+except ImportError:  # Python 2
+    # HACK: The following is the correct parse_qs implementation from cpython 3's stdlib.
+    # Python 2's version is apparently totally broken
+
+    def _parse_qsl(qs, keep_blank_values=False, strict_parsing=False,
+                   encoding='utf-8', errors='replace'):
+        qs, _coerce_result = qs, compat_str
+        pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]
+        r = []
+        for name_value in pairs:
+            if not name_value and not strict_parsing:
+                continue
+            nv = name_value.split('=', 1)
+            if len(nv) != 2:
+                if strict_parsing:
+                    raise ValueError("bad query field: %r" % (name_value,))
+                # Handle case of a control-name with no equal sign
+                if keep_blank_values:
+                    nv.append('')
+                else:
+                    continue
+            if len(nv[1]) or keep_blank_values:
+                name = nv[0].replace('+', ' ')
+                name = compat_urllib_parse_unquote(
+                    name, encoding=encoding, errors=errors)
+                name = _coerce_result(name)
+                value = nv[1].replace('+', ' ')
+                value = compat_urllib_parse_unquote(
+                    value, encoding=encoding, errors=errors)
+                value = _coerce_result(value)
+                r.append((name, value))
+        return r
+
+    def compat_parse_qs(qs, keep_blank_values=False, strict_parsing=False,
+                        encoding='utf-8', errors='replace'):
+        parsed_result = {}
+        pairs = _parse_qsl(qs, keep_blank_values, strict_parsing,
+                           encoding=encoding, errors=errors)
+        for name, value in pairs:
+            if name in parsed_result:
+                parsed_result[name].append(value)
+            else:
+                parsed_result[name] = [value]
+        return parsed_result
+
+try:
+    from shlex import quote as shlex_quote
+except ImportError:  # Python < 3.3
+    def shlex_quote(s):
+        if re.match(r'^[-_\w./]+$', s):
+            return s
+        else:
+            return "'" + s.replace("'", "'\"'\"'") + "'"
+
+
+def compat_ord(c):
+    if type(c) is int:
+        return c
+    else:
+        return ord(c)
+
+
+if sys.version_info >= (3, 0):
+    compat_getenv = os.getenv
+    compat_expanduser = os.path.expanduser
+else:
+    # Environment variables should be decoded with filesystem encoding.
+    # Otherwise it will fail if any non-ASCII characters present (see #3854 #3217 #2918)
+
+    def compat_getenv(key, default=None):
+        from .utils import get_filesystem_encoding
+        env = os.getenv(key, default)
+        if env:
+            env = env.decode(get_filesystem_encoding())
+        return env
+
+    # HACK: The default implementations of os.path.expanduser from cpython do not decode
+    # environment variables with filesystem encoding. We will work around this by
+    # providing adjusted implementations.
+    # The following are os.path.expanduser implementations from cpython 2.7.8 stdlib
+    # for different platforms with correct environment variables decoding.
+
+    if os.name == 'posix':
+        def compat_expanduser(path):
+            """Expand ~ and ~user constructions.  If user or $HOME is unknown,
+            do nothing."""
+            if not path.startswith('~'):
+                return path
+            i = path.find('/', 1)
+            if i < 0:
+                i = len(path)
+            if i == 1:
+                if 'HOME' not in os.environ:
+                    import pwd
+                    userhome = pwd.getpwuid(os.getuid()).pw_dir
+                else:
+                    userhome = compat_getenv('HOME')
+            else:
+                import pwd
+                try:
+                    pwent = pwd.getpwnam(path[1:i])
+                except KeyError:
+                    return path
+                userhome = pwent.pw_dir
+            userhome = userhome.rstrip('/')
+            return (userhome + path[i:]) or '/'
+    elif os.name == 'nt' or os.name == 'ce':
+        def compat_expanduser(path):
+            """Expand ~ and ~user constructs.
+
+            If user or $HOME is unknown, do nothing."""
+            if path[:1] != '~':
+                return path
+            i, n = 1, len(path)
+            while i < n and path[i] not in '/\\':
+                i = i + 1
+
+            if 'HOME' in os.environ:
+                userhome = compat_getenv('HOME')
+            elif 'USERPROFILE' in os.environ:
+                userhome = compat_getenv('USERPROFILE')
+            elif 'HOMEPATH' not in os.environ:
+                return path
+            else:
+                try:
+                    drive = compat_getenv('HOMEDRIVE')
+                except KeyError:
+                    drive = ''
+                userhome = os.path.join(drive, compat_getenv('HOMEPATH'))
+
+            if i != 1:  # ~user
+                userhome = os.path.join(os.path.dirname(userhome), path[1:i])
+
+            return userhome + path[i:]
+    else:
+        compat_expanduser = os.path.expanduser
+
+
+if sys.version_info < (3, 0):
+    def compat_print(s):
+        from .utils import preferredencoding
+        print(s.encode(preferredencoding(), 'xmlcharrefreplace'))
+else:
+    def compat_print(s):
+        assert isinstance(s, compat_str)
+        print(s)
+
+
+try:
+    subprocess_check_output = subprocess.check_output
+except AttributeError:
+    def subprocess_check_output(*args, **kwargs):
+        assert 'input' not in kwargs
+        p = subprocess.Popen(*args, stdout=subprocess.PIPE, **kwargs)
+        output, _ = p.communicate()
+        ret = p.poll()
+        if ret:
+            raise subprocess.CalledProcessError(ret, p.args, output=output)
+        return output
+
+if sys.version_info < (3, 0) and sys.platform == 'win32':
+    def compat_getpass(prompt, *args, **kwargs):
+        if isinstance(prompt, compat_str):
+            from .utils import preferredencoding
+            prompt = prompt.encode(preferredencoding())
+        return getpass.getpass(prompt, *args, **kwargs)
+else:
+    compat_getpass = getpass.getpass
+
+# Old 2.6 and 2.7 releases require kwargs to be bytes
+try:
+    def _testfunc(x):
+        pass
+    _testfunc(**{'x': 0})
+except TypeError:
+    def compat_kwargs(kwargs):
+        return dict((bytes(k), v) for k, v in kwargs.items())
+else:
+    compat_kwargs = lambda kwargs: kwargs
+
+
+if sys.version_info < (2, 7):
+    def compat_socket_create_connection(address, timeout, source_address=None):
+        host, port = address
+        err = None
+        for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
+            af, socktype, proto, canonname, sa = res
+            sock = None
+            try:
+                sock = socket.socket(af, socktype, proto)
+                sock.settimeout(timeout)
+                if source_address:
+                    sock.bind(source_address)
+                sock.connect(sa)
+                return sock
+            except socket.error as _:
+                err = _
+                if sock is not None:
+                    sock.close()
+        if err is not None:
+            raise err
+        else:
+            raise socket.error("getaddrinfo returns an empty list")
+else:
+    compat_socket_create_connection = socket.create_connection
+
+
+# Fix https://github.com/rg3/youtube-dl/issues/4223
+# See http://bugs.python.org/issue9161 for what is broken
+def workaround_optparse_bug9161():
+    op = optparse.OptionParser()
+    og = optparse.OptionGroup(op, 'foo')
+    try:
+        og.add_option('-t')
+    except TypeError:
+        real_add_option = optparse.OptionGroup.add_option
+
+        def _compat_add_option(self, *args, **kwargs):
+            enc = lambda v: (
+                v.encode('ascii', 'replace') if isinstance(v, compat_str)
+                else v)
+            bargs = [enc(a) for a in args]
+            bkwargs = dict(
+                (k, enc(v)) for k, v in kwargs.items())
+            return real_add_option(self, *bargs, **bkwargs)
+        optparse.OptionGroup.add_option = _compat_add_option
+
+if hasattr(shutil, 'get_terminal_size'):  # Python >= 3.3
+    compat_get_terminal_size = shutil.get_terminal_size
+else:
+    _terminal_size = collections.namedtuple('terminal_size', ['columns', 'lines'])
+
+    def compat_get_terminal_size():
+        columns = compat_getenv('COLUMNS', None)
+        if columns:
+            columns = int(columns)
+        else:
+            columns = None
+        lines = compat_getenv('LINES', None)
+        if lines:
+            lines = int(lines)
+        else:
+            lines = None
+
+        try:
+            sp = subprocess.Popen(
+                ['stty', 'size'],
+                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+            out, err = sp.communicate()
+            lines, columns = map(int, out.split())
+        except Exception:
+            pass
+        return _terminal_size(columns, lines)
+
+try:
+    itertools.count(start=0, step=1)
+    compat_itertools_count = itertools.count
+except TypeError:  # Python 2.6
+    def compat_itertools_count(start=0, step=1):
+        n = start
+        while True:
+            yield n
+            n += step
+
+__all__ = [
+    'compat_HTTPError',
+    'compat_basestring',
+    'compat_chr',
+    'compat_cookiejar',
+    'compat_expanduser',
+    'compat_get_terminal_size',
+    'compat_getenv',
+    'compat_getpass',
+    'compat_html_entities',
+    'compat_http_client',
+    'compat_http_server',
+    'compat_itertools_count',
+    'compat_kwargs',
+    'compat_ord',
+    'compat_parse_qs',
+    'compat_print',
+    'compat_socket_create_connection',
+    'compat_str',
+    'compat_subprocess_get_DEVNULL',
+    'compat_urllib_error',
+    'compat_urllib_parse',
+    'compat_urllib_parse_unquote',
+    'compat_urllib_parse_unquote_plus',
+    'compat_urllib_parse_unquote_to_bytes',
+    'compat_urllib_parse_urlparse',
+    'compat_urllib_request',
+    'compat_urlparse',
+    'compat_urlretrieve',
+    'compat_xml_parse_error',
+    'shlex_quote',
+    'subprocess_check_output',
+    'workaround_optparse_bug9161',
+]

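Since the Python 2 fallback for parse_qs is hand-rolled above, it is worth
stating the semantics it has to match (these mirror Python 3's
urllib.parse.parse_qs): repeated keys accumulate into lists, and blank values
are dropped unless explicitly kept.

    compat_parse_qs('a=1&a=2&b=')
    # -> {'a': ['1', '2']}   (blank 'b' dropped by default)
    compat_parse_qs('a=1&b=', keep_blank_values=True)
    # -> {'a': ['1'], 'b': ['']}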
=== added directory 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader'
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/__init__.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/__init__.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/__init__.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,46 @@
+from __future__ import unicode_literals
+
+from .common import FileDownloader
+from .external import get_external_downloader
+from .f4m import F4mFD
+from .hls import HlsFD
+from .hls import NativeHlsFD
+from .http import HttpFD
+from .rtsp import RtspFD
+from .rtmp import RtmpFD
+
+from ..utils import (
+    determine_protocol,
+)
+
+PROTOCOL_MAP = {
+    'rtmp': RtmpFD,
+    'm3u8_native': NativeHlsFD,
+    'm3u8': HlsFD,
+    'mms': RtspFD,
+    'rtsp': RtspFD,
+    'f4m': F4mFD,
+}
+
+
+def get_suitable_downloader(info_dict, params={}):
+    """Get the downloader class that can handle the info dict."""
+    protocol = determine_protocol(info_dict)
+    info_dict['protocol'] = protocol
+
+    external_downloader = params.get('external_downloader')
+    if external_downloader is not None:
+        ed = get_external_downloader(external_downloader)
+        if ed.supports(info_dict):
+            return ed
+
+    if protocol == 'm3u8' and params.get('hls_prefer_native'):
+        return NativeHlsFD
+
+    return PROTOCOL_MAP.get(protocol, HttpFD)
+
+
+__all__ = [
+    'get_suitable_downloader',
+    'FileDownloader',
+]

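get_suitable_downloader() derives the protocol from the info dict (also
storing it back into info_dict['protocol']) and maps it through PROTOCOL_MAP,
falling back to HttpFD; a configured external downloader wins if it supports
the protocol. Expected dispatch, with illustrative URLs:

    get_suitable_downloader({'url': 'rtmp://example.com/stream'})  # -> RtmpFD
    get_suitable_downloader({'url': 'http://example.com/v.mp4'})   # -> HttpFD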
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/common.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/common.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/common.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,372 @@
+from __future__ import division, unicode_literals
+
+import os
+import re
+import sys
+import time
+
+from ..compat import compat_str
+from ..utils import (
+    encodeFilename,
+    decodeArgument,
+    format_bytes,
+    timeconvert,
+)
+
+
+class FileDownloader(object):
+    """File Downloader class.
+
+    File downloader objects are the ones responsible for downloading the
+    actual video file and writing it to disk.
+
+    File downloaders accept a lot of parameters. In order not to saturate
+    the object constructor with arguments, it receives a dictionary of
+    options instead.
+
+    Available options:
+
+    verbose:            Print additional info to stdout.
+    quiet:              Do not print messages to stdout.
+    ratelimit:          Download speed limit, in bytes/sec.
+    retries:            Number of times to retry for HTTP error 5xx
+    buffersize:         Size of download buffer in bytes.
+    noresizebuffer:     Do not automatically resize the download buffer.
+    continuedl:         Try to continue downloads if possible.
+    noprogress:         Do not print the progress bar.
+    logtostderr:        Log messages to stderr instead of stdout.
+    consoletitle:       Display progress in console window's titlebar.
+    nopart:             Do not use temporary .part files.
+    updatetime:         Use the Last-modified header to set output file timestamps.
+    test:               Download only first bytes to test the downloader.
+    min_filesize:       Skip files smaller than this size
+    max_filesize:       Skip files larger than this size
+    xattr_set_filesize: Set the ytdl.filesize user xattr to the expected size.
+                        (experimental)
+    external_downloader_args:  A list of additional command-line arguments for the
+                        external downloader.
+
+    Subclasses of this one must re-define the real_download method.
+    """
+
+    _TEST_FILE_SIZE = 10241
+    params = None
+
+    def __init__(self, ydl, params):
+        """Create a FileDownloader object with the given options."""
+        self.ydl = ydl
+        self._progress_hooks = []
+        self.params = params
+        self.add_progress_hook(self.report_progress)
+
+    @staticmethod
+    def format_seconds(seconds):
+        (mins, secs) = divmod(seconds, 60)
+        (hours, mins) = divmod(mins, 60)
+        if hours > 99:
+            return '--:--:--'
+        if hours == 0:
+            return '%02d:%02d' % (mins, secs)
+        else:
+            return '%02d:%02d:%02d' % (hours, mins, secs)
+
+    @staticmethod
+    def calc_percent(byte_counter, data_len):
+        if data_len is None:
+            return None
+        return float(byte_counter) / float(data_len) * 100.0
+
+    @staticmethod
+    def format_percent(percent):
+        if percent is None:
+            return '---.-%'
+        return '%6s' % ('%3.1f%%' % percent)
+
+    @staticmethod
+    def calc_eta(start, now, total, current):
+        if total is None:
+            return None
+        if now is None:
+            now = time.time()
+        dif = now - start
+        if current == 0 or dif < 0.001:  # One millisecond
+            return None
+        rate = float(current) / dif
+        return int((float(total) - float(current)) / rate)
+
+    @staticmethod
+    def format_eta(eta):
+        if eta is None:
+            return '--:--'
+        return FileDownloader.format_seconds(eta)
+
+    @staticmethod
+    def calc_speed(start, now, bytes):
+        dif = now - start
+        if bytes == 0 or dif < 0.001:  # One millisecond
+            return None
+        return float(bytes) / dif
+
+    @staticmethod
+    def format_speed(speed):
+        if speed is None:
+            return '%10s' % '---b/s'
+        return '%10s' % ('%s/s' % format_bytes(speed))
+
+    @staticmethod
+    def best_block_size(elapsed_time, bytes):
+        new_min = max(bytes / 2.0, 1.0)
+        new_max = min(max(bytes * 2.0, 1.0), 4194304)  # Do not surpass 4 MB
+        if elapsed_time < 0.001:
+            return int(new_max)
+        rate = bytes / elapsed_time
+        if rate > new_max:
+            return int(new_max)
+        if rate < new_min:
+            return int(new_min)
+        return int(rate)
+
+    @staticmethod
+    def parse_bytes(bytestr):
+        """Parse a string indicating a byte quantity into an integer."""
+        matchobj = re.match(r'(?i)^(\d+(?:\.\d+)?)([kMGTPEZY]?)$', bytestr)
+        if matchobj is None:
+            return None
+        number = float(matchobj.group(1))
+        multiplier = 1024.0 ** 'bkmgtpezy'.index(matchobj.group(2).lower())
+        return int(round(number * multiplier))
+
+    def to_screen(self, *args, **kargs):
+        self.ydl.to_screen(*args, **kargs)
+
+    def to_stderr(self, message):
+        self.ydl.to_screen(message)
+
+    def to_console_title(self, message):
+        self.ydl.to_console_title(message)
+
+    def trouble(self, *args, **kargs):
+        self.ydl.trouble(*args, **kargs)
+
+    def report_warning(self, *args, **kargs):
+        self.ydl.report_warning(*args, **kargs)
+
+    def report_error(self, *args, **kargs):
+        self.ydl.report_error(*args, **kargs)
+
+    def slow_down(self, start_time, now, byte_counter):
+        """Sleep if the download speed is over the rate limit."""
+        rate_limit = self.params.get('ratelimit', None)
+        if rate_limit is None or byte_counter == 0:
+            return
+        if now is None:
+            now = time.time()
+        elapsed = now - start_time
+        if elapsed <= 0.0:
+            return
+        speed = float(byte_counter) / elapsed
+        if speed > rate_limit:
+            time.sleep(max((byte_counter // rate_limit) - elapsed, 0))
+
+    def temp_name(self, filename):
+        """Returns a temporary filename for the given filename."""
+        if self.params.get('nopart', False) or filename == '-' or \
+                (os.path.exists(encodeFilename(filename)) and not os.path.isfile(encodeFilename(filename))):
+            return filename
+        return filename + '.part'
+
+    def undo_temp_name(self, filename):
+        if filename.endswith('.part'):
+            return filename[:-len('.part')]
+        return filename
+
+    def try_rename(self, old_filename, new_filename):
+        try:
+            if old_filename == new_filename:
+                return
+            os.rename(encodeFilename(old_filename), encodeFilename(new_filename))
+        except (IOError, OSError) as err:
+            self.report_error('unable to rename file: %s' % compat_str(err))
+
+    def try_utime(self, filename, last_modified_hdr):
+        """Try to set the last-modified time of the given file."""
+        if last_modified_hdr is None:
+            return
+        if not os.path.isfile(encodeFilename(filename)):
+            return
+        timestr = last_modified_hdr
+        if timestr is None:
+            return
+        filetime = timeconvert(timestr)
+        if filetime is None:
+            return filetime
+        # Ignore obviously invalid dates
+        if filetime == 0:
+            return
+        try:
+            os.utime(filename, (time.time(), filetime))
+        except Exception:
+            pass
+        return filetime
+
+    def report_destination(self, filename):
+        """Report destination filename."""
+        self.to_screen('[download] Destination: ' + filename)
+
+    def _report_progress_status(self, msg, is_last_line=False):
+        fullmsg = '[download] ' + msg
+        if self.params.get('progress_with_newline', False):
+            self.to_screen(fullmsg)
+        else:
+            if os.name == 'nt':
+                prev_len = getattr(self, '_report_progress_prev_line_length',
+                                   0)
+                if prev_len > len(fullmsg):
+                    fullmsg += ' ' * (prev_len - len(fullmsg))
+                self._report_progress_prev_line_length = len(fullmsg)
+                clear_line = '\r'
+            else:
+                clear_line = ('\r\x1b[K' if sys.stderr.isatty() else '\r')
+            self.to_screen(clear_line + fullmsg, skip_eol=not is_last_line)
+        self.to_console_title('youtube-dl ' + msg)
+
+    def report_progress(self, s):
+        if s['status'] == 'finished':
+            if self.params.get('noprogress', False):
+                self.to_screen('[download] Download completed')
+            else:
+                s['_total_bytes_str'] = format_bytes(s['total_bytes'])
+                if s.get('elapsed') is not None:
+                    s['_elapsed_str'] = self.format_seconds(s['elapsed'])
+                    msg_template = '100%% of %(_total_bytes_str)s in %(_elapsed_str)s'
+                else:
+                    msg_template = '100%% of %(_total_bytes_str)s'
+                self._report_progress_status(
+                    msg_template % s, is_last_line=True)
+
+        if self.params.get('noprogress'):
+            return
+
+        if s['status'] != 'downloading':
+            return
+
+        if s.get('eta') is not None:
+            s['_eta_str'] = self.format_eta(s['eta'])
+        else:
+            s['_eta_str'] = 'Unknown ETA'
+
+        if s.get('total_bytes') and s.get('downloaded_bytes') is not None:
+            s['_percent_str'] = self.format_percent(100 * s['downloaded_bytes'] / s['total_bytes'])
+        elif s.get('total_bytes_estimate') and s.get('downloaded_bytes') is not None:
+            s['_percent_str'] = self.format_percent(100 * s['downloaded_bytes'] / s['total_bytes_estimate'])
+        else:
+            if s.get('downloaded_bytes') == 0:
+                s['_percent_str'] = self.format_percent(0)
+            else:
+                s['_percent_str'] = 'Unknown %'
+
+        if s.get('speed') is not None:
+            s['_speed_str'] = self.format_speed(s['speed'])
+        else:
+            s['_speed_str'] = 'Unknown speed'
+
+        if s.get('total_bytes') is not None:
+            s['_total_bytes_str'] = format_bytes(s['total_bytes'])
+            msg_template = '%(_percent_str)s of %(_total_bytes_str)s at %(_speed_str)s ETA %(_eta_str)s'
+        elif s.get('total_bytes_estimate') is not None:
+            s['_total_bytes_estimate_str'] = format_bytes(s['total_bytes_estimate'])
+            msg_template = '%(_percent_str)s of ~%(_total_bytes_estimate_str)s at %(_speed_str)s ETA %(_eta_str)s'
+        else:
+            if s.get('downloaded_bytes') is not None:
+                s['_downloaded_bytes_str'] = format_bytes(s['downloaded_bytes'])
+                if s.get('elapsed'):
+                    s['_elapsed_str'] = self.format_seconds(s['elapsed'])
+                    msg_template = '%(_downloaded_bytes_str)s at %(_speed_str)s (%(_elapsed_str)s)'
+                else:
+                    msg_template = '%(_downloaded_bytes_str)s at %(_speed_str)s'
+            else:
+                msg_template = '%(_percent_str)s at %(_speed_str)s ETA %(_eta_str)s'
+
+        self._report_progress_status(msg_template % s)
+
+    def report_resuming_byte(self, resume_len):
+        """Report attempt to resume at given byte."""
+        self.to_screen('[download] Resuming download at byte %s' % resume_len)
+
+    def report_retry(self, count, retries):
+        """Report retry in case of HTTP error 5xx"""
+        self.to_screen('[download] Got server HTTP error. Retrying (attempt %d of %d)...' % (count, retries))
+
+    def report_file_already_downloaded(self, file_name):
+        """Report file has already been fully downloaded."""
+        try:
+            self.to_screen('[download] %s has already been downloaded' % file_name)
+        except UnicodeEncodeError:
+            self.to_screen('[download] The file has already been downloaded')
+
+    def report_unable_to_resume(self):
+        """Report it was impossible to resume download."""
+        self.to_screen('[download] Unable to resume')
+
+    def download(self, filename, info_dict):
+        """Download to a filename using the info from info_dict.
+        Return True on success and False otherwise.
+        """
+
+        nooverwrites_and_exists = (
+            self.params.get('nooverwrites', False) and
+            os.path.exists(encodeFilename(filename))
+        )
+
+        continuedl_and_exists = (
+            self.params.get('continuedl', True) and
+            os.path.isfile(encodeFilename(filename)) and
+            not self.params.get('nopart', False)
+        )
+
+        # Check file already present
+        if filename != '-' and (nooverwrites_and_exists or continuedl_and_exists):
+            self.report_file_already_downloaded(filename)
+            self._hook_progress({
+                'filename': filename,
+                'status': 'finished',
+                'total_bytes': os.path.getsize(encodeFilename(filename)),
+            })
+            return True
+
+        sleep_interval = self.params.get('sleep_interval')
+        if sleep_interval:
+            self.to_screen('[download] Sleeping %s seconds...' % sleep_interval)
+            time.sleep(sleep_interval)
+
+        return self.real_download(filename, info_dict)
+
+    def real_download(self, filename, info_dict):
+        """Real download process. Redefine in subclasses."""
+        raise NotImplementedError('This method must be implemented by subclasses')
+
+    def _hook_progress(self, status):
+        for ph in self._progress_hooks:
+            ph(status)
+
+    def add_progress_hook(self, ph):
+        # See YoutubeDL.py (search for progress_hooks) for a description of
+        # this interface
+        self._progress_hooks.append(ph)
+
+    def _debug_cmd(self, args, exe=None):
+        if not self.params.get('verbose', False):
+            return
+
+        str_args = [decodeArgument(a) for a in args]
+
+        if exe is None:
+            exe = os.path.basename(str_args[0])
+
+        try:
+            import pipes
+            shell_quote = lambda args: ' '.join(map(pipes.quote, args))
+        except ImportError:
+            shell_quote = repr
+        self.to_screen('[debug] %s command line: %s' % (
+            exe, shell_quote(str_args)))

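Concrete downloaders only have to implement real_download() and report
through _hook_progress(); the base class supplies rate limiting, temp-file
naming and progress output. A minimal hypothetical subclass, for illustration
only:

    class NullFD(FileDownloader):  # hypothetical, not part of youtube-dl
        def real_download(self, filename, info_dict):
            # A real subclass would fetch and write data here; this one
            # just reports an empty, finished download.
            self._hook_progress({
                'filename': filename,
                'status': 'finished',
                'total_bytes': 0,
            })
            return True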
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/external.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/external.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/external.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,136 @@
+from __future__ import unicode_literals
+
+import os.path
+import subprocess
+
+from .common import FileDownloader
+from ..utils import (
+    encodeFilename,
+    encodeArgument,
+)
+
+
+class ExternalFD(FileDownloader):
+    def real_download(self, filename, info_dict):
+        self.report_destination(filename)
+        tmpfilename = self.temp_name(filename)
+
+        retval = self._call_downloader(tmpfilename, info_dict)
+        if retval == 0:
+            fsize = os.path.getsize(encodeFilename(tmpfilename))
+            self.to_screen('\r[%s] Downloaded %s bytes' % (self.get_basename(), fsize))
+            self.try_rename(tmpfilename, filename)
+            self._hook_progress({
+                'downloaded_bytes': fsize,
+                'total_bytes': fsize,
+                'filename': filename,
+                'status': 'finished',
+            })
+            return True
+        else:
+            self.to_stderr('\n')
+            self.report_error('%s exited with code %d' % (
+                self.get_basename(), retval))
+            return False
+
+    @classmethod
+    def get_basename(cls):
+        return cls.__name__[:-2].lower()
+
+    @property
+    def exe(self):
+        return self.params.get('external_downloader')
+
+    @classmethod
+    def supports(cls, info_dict):
+        return info_dict['protocol'] in ('http', 'https', 'ftp', 'ftps')
+
+    def _source_address(self, command_option):
+        source_address = self.params.get('source_address')
+        if source_address is None:
+            return []
+        return [command_option, source_address]
+
+    def _configuration_args(self, default=[]):
+        ex_args = self.params.get('external_downloader_args')
+        if ex_args is None:
+            return default
+        assert isinstance(ex_args, list)
+        return ex_args
+
+    def _call_downloader(self, tmpfilename, info_dict):
+        """ Either override this or implement _make_cmd """
+        cmd = [encodeArgument(a) for a in self._make_cmd(tmpfilename, info_dict)]
+
+        self._debug_cmd(cmd)
+
+        p = subprocess.Popen(
+            cmd, stderr=subprocess.PIPE)
+        _, stderr = p.communicate()
+        if p.returncode != 0:
+            self.to_stderr(stderr)
+        return p.returncode
+
+
+class CurlFD(ExternalFD):
+    def _make_cmd(self, tmpfilename, info_dict):
+        cmd = [self.exe, '--location', '-o', tmpfilename]
+        for key, val in info_dict['http_headers'].items():
+            cmd += ['--header', '%s: %s' % (key, val)]
+        cmd += self._source_address('--interface')
+        cmd += self._configuration_args()
+        cmd += ['--', info_dict['url']]
+        return cmd
+
+
+class WgetFD(ExternalFD):
+    def _make_cmd(self, tmpfilename, info_dict):
+        cmd = [self.exe, '-O', tmpfilename, '-nv', '--no-cookies']
+        for key, val in info_dict['http_headers'].items():
+            cmd += ['--header', '%s: %s' % (key, val)]
+        cmd += self._source_address('--bind-address')
+        cmd += self._configuration_args()
+        cmd += ['--', info_dict['url']]
+        return cmd
+
+
+class Aria2cFD(ExternalFD):
+    def _make_cmd(self, tmpfilename, info_dict):
+        cmd = [self.exe, '-c']
+        cmd += self._configuration_args([
+            '--min-split-size', '1M', '--max-connection-per-server', '4'])
+        dn = os.path.dirname(tmpfilename)
+        if dn:
+            cmd += ['--dir', dn]
+        cmd += ['--out', os.path.basename(tmpfilename)]
+        for key, val in info_dict['http_headers'].items():
+            cmd += ['--header', '%s: %s' % (key, val)]
+        cmd += self._source_address('--interface')
+        cmd += ['--', info_dict['url']]
+        return cmd
+
+
+class HttpieFD(ExternalFD):
+    def _make_cmd(self, tmpfilename, info_dict):
+        cmd = ['http', '--download', '--output', tmpfilename, info_dict['url']]
+        for key, val in info_dict['http_headers'].items():
+            cmd += ['%s:%s' % (key, val)]
+        return cmd
+
+_BY_NAME = dict(
+    (klass.get_basename(), klass)
+    for name, klass in globals().items()
+    if name.endswith('FD') and name != 'ExternalFD'
+)
+
+
+def list_external_downloaders():
+    return sorted(_BY_NAME.keys())
+
+
+def get_external_downloader(external_downloader):
+    """ Given the name of the executable, see whether we support the given
+        downloader. """
+    # Drop .exe extension on Windows
+    bn = os.path.splitext(os.path.basename(external_downloader))[0]
+    return _BY_NAME[bn]

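Adding another external backend is mostly a naming convention: _BY_NAME is
built at import time from every class in this module whose name ends in 'FD'
(ExternalFD excluded), and get_basename() lower-cases the name minus that
suffix. A hypothetical backend, assuming a curl-like command line; note it
must be defined above the _BY_NAME statement to be discovered:

    class Axel2FD(ExternalFD):  # hypothetical; --external-downloader axel2
        def _make_cmd(self, tmpfilename, info_dict):
            cmd = [self.exe, '-o', tmpfilename]
            for key, val in info_dict['http_headers'].items():
                cmd += ['-H', '%s: %s' % (key, val)]
            cmd += [info_dict['url']]
            return cmd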
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/f4m.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/f4m.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/f4m.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,444 @@
+from __future__ import division, unicode_literals
+
+import base64
+import io
+import itertools
+import os
+import time
+import xml.etree.ElementTree as etree
+
+from .common import FileDownloader
+from .http import HttpFD
+from ..compat import (
+    compat_urlparse,
+    compat_urllib_error,
+)
+from ..utils import (
+    struct_pack,
+    struct_unpack,
+    encodeFilename,
+    sanitize_open,
+    xpath_text,
+)
+
+
+class FlvReader(io.BytesIO):
+    """
+    Reader for FLV files.
+    The file format is documented in https://www.adobe.com/devnet/f4v.html
+    """
+
+    # Utility functions for reading numbers and strings
+    def read_unsigned_long_long(self):
+        return struct_unpack('!Q', self.read(8))[0]
+
+    def read_unsigned_int(self):
+        return struct_unpack('!I', self.read(4))[0]
+
+    def read_unsigned_char(self):
+        return struct_unpack('!B', self.read(1))[0]
+
+    def read_string(self):
+        res = b''
+        while True:
+            char = self.read(1)
+            if char == b'\x00':
+                break
+            res += char
+        return res
+
+    def read_box_info(self):
+        """
+        Read a box and return the info as a tuple: (box_size, box_type, box_data)
+        """
+        real_size = size = self.read_unsigned_int()
+        box_type = self.read(4)
+        header_end = 8
+        if size == 1:
+            real_size = self.read_unsigned_long_long()
+            header_end = 16
+        return real_size, box_type, self.read(real_size - header_end)
+
+    def read_asrt(self):
+        # version
+        self.read_unsigned_char()
+        # flags
+        self.read(3)
+        quality_entry_count = self.read_unsigned_char()
+        # QualityEntryCount
+        for i in range(quality_entry_count):
+            self.read_string()
+
+        segment_run_count = self.read_unsigned_int()
+        segments = []
+        for i in range(segment_run_count):
+            first_segment = self.read_unsigned_int()
+            fragments_per_segment = self.read_unsigned_int()
+            segments.append((first_segment, fragments_per_segment))
+
+        return {
+            'segment_run': segments,
+        }
+
+    def read_afrt(self):
+        # version
+        self.read_unsigned_char()
+        # flags
+        self.read(3)
+        # time scale
+        self.read_unsigned_int()
+
+        quality_entry_count = self.read_unsigned_char()
+        # QualitySegmentUrlModifiers
+        for i in range(quality_entry_count):
+            self.read_string()
+
+        fragments_count = self.read_unsigned_int()
+        fragments = []
+        for i in range(fragments_count):
+            first = self.read_unsigned_int()
+            first_ts = self.read_unsigned_long_long()
+            duration = self.read_unsigned_int()
+            if duration == 0:
+                discontinuity_indicator = self.read_unsigned_char()
+            else:
+                discontinuity_indicator = None
+            fragments.append({
+                'first': first,
+                'ts': first_ts,
+                'duration': duration,
+                'discontinuity_indicator': discontinuity_indicator,
+            })
+
+        return {
+            'fragments': fragments,
+        }
+
+    def read_abst(self):
+        # version
+        self.read_unsigned_char()
+        # flags
+        self.read(3)
+
+        self.read_unsigned_int()  # BootstrapinfoVersion
+        # Profile,Live,Update,Reserved
+        flags = self.read_unsigned_char()
+        live = flags & 0x20 != 0
+        # time scale
+        self.read_unsigned_int()
+        # CurrentMediaTime
+        self.read_unsigned_long_long()
+        # SmpteTimeCodeOffset
+        self.read_unsigned_long_long()
+
+        self.read_string()  # MovieIdentifier
+        server_count = self.read_unsigned_char()
+        # ServerEntryTable
+        for i in range(server_count):
+            self.read_string()
+        quality_count = self.read_unsigned_char()
+        # QualityEntryTable
+        for i in range(quality_count):
+            self.read_string()
+        # DrmData
+        self.read_string()
+        # MetaData
+        self.read_string()
+
+        segments_count = self.read_unsigned_char()
+        segments = []
+        for i in range(segments_count):
+            box_size, box_type, box_data = self.read_box_info()
+            assert box_type == b'asrt'
+            segment = FlvReader(box_data).read_asrt()
+            segments.append(segment)
+        fragments_run_count = self.read_unsigned_char()
+        fragments = []
+        for i in range(fragments_run_count):
+            box_size, box_type, box_data = self.read_box_info()
+            assert box_type == b'afrt'
+            fragments.append(FlvReader(box_data).read_afrt())
+
+        return {
+            'segments': segments,
+            'fragments': fragments,
+            'live': live,
+        }
+
+    def read_bootstrap_info(self):
+        total_size, box_type, box_data = self.read_box_info()
+        assert box_type == b'abst'
+        return FlvReader(box_data).read_abst()
+
+
+def read_bootstrap_info(bootstrap_bytes):
+    return FlvReader(bootstrap_bytes).read_bootstrap_info()
+
+
+def build_fragments_list(boot_info):
+    """ Return a list of (segment, fragment) for each fragment in the video """
+    res = []
+    segment_run_table = boot_info['segments'][0]
+    fragment_run_entry_table = boot_info['fragments'][0]['fragments']
+    first_frag_number = fragment_run_entry_table[0]['first']
+    fragments_counter = itertools.count(first_frag_number)
+    for segment, fragments_count in segment_run_table['segment_run']:
+        for _ in range(fragments_count):
+            res.append((segment, next(fragments_counter)))
+
+    if boot_info['live']:
+        res = res[-2:]
+
+    return res
+
+
+def write_unsigned_int(stream, val):
+    stream.write(struct_pack('!I', val))
+
+
+def write_unsigned_int_24(stream, val):
+    stream.write(struct_pack('!I', val)[1:])
+
+
+def write_flv_header(stream):
+    """Writes the FLV header to stream"""
+    # FLV header
+    stream.write(b'FLV\x01')
+    stream.write(b'\x05')
+    stream.write(b'\x00\x00\x00\x09')
+    stream.write(b'\x00\x00\x00\x00')
+
+
+def write_metadata_tag(stream, metadata):
+    """Writes optional metadata tag to stream"""
+    SCRIPT_TAG = b'\x12'
+    FLV_TAG_HEADER_LEN = 11
+
+    if metadata:
+        stream.write(SCRIPT_TAG)
+        write_unsigned_int_24(stream, len(metadata))
+        stream.write(b'\x00\x00\x00\x00\x00\x00\x00')
+        stream.write(metadata)
+        write_unsigned_int(stream, FLV_TAG_HEADER_LEN + len(metadata))
+
+
+def _add_ns(prop):
+    return '{http://ns.adobe.com/f4m/1.0}%s' % prop
+
+
+class HttpQuietDownloader(HttpFD):
+    def to_screen(self, *args, **kargs):
+        pass
+
+
+class F4mFD(FileDownloader):
+    """
+    A downloader for f4m manifests or AdobeHDS.
+    """
+
+    def _get_unencrypted_media(self, doc):
+        media = doc.findall(_add_ns('media'))
+        if not media:
+            self.report_error('No media found')
+        for e in (doc.findall(_add_ns('drmAdditionalHeader')) +
+                  doc.findall(_add_ns('drmAdditionalHeaderSet'))):
+            # If id attribute is missing it's valid for all media nodes
+            # without drmAdditionalHeaderId or drmAdditionalHeaderSetId attribute
+            if 'id' not in e.attrib:
+                self.report_error('Missing ID in f4m DRM')
+        media = list(filter(lambda e: 'drmAdditionalHeaderId' not in e.attrib and
+                                      'drmAdditionalHeaderSetId' not in e.attrib,
+                            media))
+        if not media:
+            self.report_error('Unsupported DRM')
+        return media
+
+    def _get_bootstrap_from_url(self, bootstrap_url):
+        bootstrap = self.ydl.urlopen(bootstrap_url).read()
+        return read_bootstrap_info(bootstrap)
+
+    def _update_live_fragments(self, bootstrap_url, latest_fragment):
+        fragments_list = []
+        retries = 30
+        while (not fragments_list) and (retries > 0):
+            boot_info = self._get_bootstrap_from_url(bootstrap_url)
+            fragments_list = build_fragments_list(boot_info)
+            fragments_list = [f for f in fragments_list if f[1] > latest_fragment]
+            if not fragments_list:
+                # Retry after a while
+                time.sleep(5.0)
+                retries -= 1
+
+        if not fragments_list:
+            self.report_error('Failed to update fragments')
+
+        return fragments_list
+
+    def _parse_bootstrap_node(self, node, base_url):
+        if node.text is None:
+            bootstrap_url = compat_urlparse.urljoin(
+                base_url, node.attrib['url'])
+            boot_info = self._get_bootstrap_from_url(bootstrap_url)
+        else:
+            bootstrap_url = None
+            bootstrap = base64.b64decode(node.text.encode('ascii'))
+            boot_info = read_bootstrap_info(bootstrap)
+        return (boot_info, bootstrap_url)
+
+    def real_download(self, filename, info_dict):
+        man_url = info_dict['url']
+        requested_bitrate = info_dict.get('tbr')
+        self.to_screen('[download] Downloading f4m manifest')
+        manifest = self.ydl.urlopen(man_url).read()
+
+        doc = etree.fromstring(manifest)
+        formats = [(int(f.attrib.get('bitrate', -1)), f)
+                   for f in self._get_unencrypted_media(doc)]
+        if requested_bitrate is None:
+            # get the best format
+            formats = sorted(formats, key=lambda f: f[0])
+            rate, media = formats[-1]
+        else:
+            rate, media = list(filter(
+                lambda f: int(f[0]) == requested_bitrate, formats))[0]
+
+        base_url = compat_urlparse.urljoin(man_url, media.attrib['url'])
+        bootstrap_node = doc.find(_add_ns('bootstrapInfo'))
+        boot_info, bootstrap_url = self._parse_bootstrap_node(bootstrap_node, base_url)
+        live = boot_info['live']
+        metadata_node = media.find(_add_ns('metadata'))
+        if metadata_node is not None:
+            metadata = base64.b64decode(metadata_node.text.encode('ascii'))
+        else:
+            metadata = None
+
+        fragments_list = build_fragments_list(boot_info)
+        if self.params.get('test', False):
+            # We only download the first fragment
+            fragments_list = fragments_list[:1]
+        total_frags = len(fragments_list)
+        # For some Akamai manifests we'll need to add a query to the fragment URL
+        akamai_pv = xpath_text(doc, _add_ns('pv-2.0'))
+
+        self.report_destination(filename)
+        http_dl = HttpQuietDownloader(
+            self.ydl,
+            {
+                'continuedl': True,
+                'quiet': True,
+                'noprogress': True,
+                'ratelimit': self.params.get('ratelimit', None),
+                'test': self.params.get('test', False),
+            }
+        )
+        tmpfilename = self.temp_name(filename)
+        (dest_stream, tmpfilename) = sanitize_open(tmpfilename, 'wb')
+
+        write_flv_header(dest_stream)
+        if not live:
+            write_metadata_tag(dest_stream, metadata)
+
+        # This dict stores the download progress, it's updated by the progress
+        # hook
+        state = {
+            'status': 'downloading',
+            'downloaded_bytes': 0,
+            'frag_index': 0,
+            'frag_count': total_frags,
+            'filename': filename,
+            'tmpfilename': tmpfilename,
+        }
+        start = time.time()
+
+        def frag_progress_hook(s):
+            if s['status'] not in ('downloading', 'finished'):
+                return
+
+            frag_total_bytes = s.get('total_bytes', 0)
+            if s['status'] == 'finished':
+                state['downloaded_bytes'] += frag_total_bytes
+                state['frag_index'] += 1
+
+            estimated_size = (
+                (state['downloaded_bytes'] + frag_total_bytes) /
+                (state['frag_index'] + 1) * total_frags)
+            time_now = time.time()
+            state['total_bytes_estimate'] = estimated_size
+            state['elapsed'] = time_now - start
+
+            if s['status'] == 'finished':
+                progress = self.calc_percent(state['frag_index'], total_frags)
+            else:
+                frag_downloaded_bytes = s['downloaded_bytes']
+                frag_progress = self.calc_percent(frag_downloaded_bytes,
+                                                  frag_total_bytes)
+                progress = self.calc_percent(state['frag_index'], total_frags)
+                progress += frag_progress / float(total_frags)
+
+                state['eta'] = self.calc_eta(
+                    start, time_now, estimated_size, state['downloaded_bytes'] + frag_downloaded_bytes)
+                state['speed'] = s.get('speed')
+            self._hook_progress(state)
+
+        http_dl.add_progress_hook(frag_progress_hook)
+
+        frags_filenames = []
+        while fragments_list:
+            seg_i, frag_i = fragments_list.pop(0)
+            name = 'Seg%d-Frag%d' % (seg_i, frag_i)
+            url = base_url + name
+            if akamai_pv:
+                url += '?' + akamai_pv.strip(';')
+            if info_dict.get('extra_param_to_segment_url'):
+                url += info_dict.get('extra_param_to_segment_url')
+            frag_filename = '%s-%s' % (tmpfilename, name)
+            try:
+                success = http_dl.download(frag_filename, {'url': url})
+                if not success:
+                    return False
+                with open(frag_filename, 'rb') as down:
+                    down_data = down.read()
+                    reader = FlvReader(down_data)
+                    while True:
+                        _, box_type, box_data = reader.read_box_info()
+                        if box_type == b'mdat':
+                            dest_stream.write(box_data)
+                            break
+                if live:
+                    os.remove(frag_filename)
+                else:
+                    frags_filenames.append(frag_filename)
+            except (compat_urllib_error.HTTPError, ) as err:
+                if live and (err.code == 404 or err.code == 410):
+                    # We didn't keep up with the live window. Continue
+                    # with the next available fragment.
+                    msg = 'Fragment %d unavailable' % frag_i
+                    self.report_warning(msg)
+                    fragments_list = []
+                else:
+                    raise
+
+            if not fragments_list and live and bootstrap_url:
+                fragments_list = self._update_live_fragments(bootstrap_url, frag_i)
+                total_frags += len(fragments_list)
+                if fragments_list and (fragments_list[0][1] > frag_i + 1):
+                    msg = 'Missed %d fragments' % (fragments_list[0][1] - (frag_i + 1))
+                    self.report_warning(msg)
+
+        dest_stream.close()
+
+        elapsed = time.time() - start
+        self.try_rename(tmpfilename, filename)
+        for frag_file in frags_filenames:
+            os.remove(frag_file)
+
+        fsize = os.path.getsize(encodeFilename(filename))
+        self._hook_progress({
+            'downloaded_bytes': fsize,
+            'total_bytes': fsize,
+            'filename': filename,
+            'status': 'finished',
+            'elapsed': elapsed,
+        })
+
+        return True
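
NOTE: the frag_progress_hook above estimates the final download size by extrapolating the mean fragment size seen so far across all fragments. A standalone sketch of that arithmetic, with illustrative names and numbers (not part of the diff):

    def estimate_total_bytes(downloaded_bytes, current_frag_bytes, frags_done, total_frags):
        # Mean bytes per fragment so far (counting the fragment in flight),
        # scaled up to the full fragment count.
        return (downloaded_bytes + current_frag_bytes) / (frags_done + 1) * total_frags

    # Example: 2 fragments finished (4 MiB downloaded), a 2 MiB fragment in
    # flight, 10 fragments overall -> an estimate of 20 MiB.
    print(estimate_total_bytes(4 * 2**20, 2 * 2**20, 2, 10))  # 20971520.0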

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/hls.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/hls.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/hls.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,104 @@
+from __future__ import unicode_literals
+
+import os
+import re
+import subprocess
+
+from ..postprocessor.ffmpeg import FFmpegPostProcessor
+from .common import FileDownloader
+from ..compat import (
+    compat_urlparse,
+    compat_urllib_request,
+)
+from ..utils import (
+    encodeArgument,
+    encodeFilename,
+)
+
+
+class HlsFD(FileDownloader):
+    def real_download(self, filename, info_dict):
+        url = info_dict['url']
+        self.report_destination(filename)
+        tmpfilename = self.temp_name(filename)
+
+        ffpp = FFmpegPostProcessor(downloader=self)
+        if not ffpp.available:
+            self.report_error('m3u8 download detected but ffmpeg or avconv could not be found. Please install one.')
+            return False
+        ffpp.check_version()
+
+        args = [
+            encodeArgument(opt)
+            for opt in (ffpp.executable, '-y', '-i', url, '-f', 'mp4', '-c', 'copy', '-bsf:a', 'aac_adtstoasc')]
+        args.append(encodeFilename(tmpfilename, True))
+
+        retval = subprocess.call(args)
+        if retval == 0:
+            fsize = os.path.getsize(encodeFilename(tmpfilename))
+            self.to_screen('\r[%s] %s bytes' % (args[0], fsize))
+            self.try_rename(tmpfilename, filename)
+            self._hook_progress({
+                'downloaded_bytes': fsize,
+                'total_bytes': fsize,
+                'filename': filename,
+                'status': 'finished',
+            })
+            return True
+        else:
+            self.to_stderr('\n')
+            self.report_error('%s exited with code %d' % (ffpp.basename, retval))
+            return False
+
+
+class NativeHlsFD(FileDownloader):
+    """ A more limited implementation that does not require ffmpeg """
+
+    def real_download(self, filename, info_dict):
+        url = info_dict['url']
+        self.report_destination(filename)
+        tmpfilename = self.temp_name(filename)
+
+        self.to_screen(
+            '[hlsnative] %s: Downloading m3u8 manifest' % info_dict['id'])
+        data = self.ydl.urlopen(url).read()
+        s = data.decode('utf-8', 'ignore')
+        segment_urls = []
+        for line in s.splitlines():
+            line = line.strip()
+            if line and not line.startswith('#'):
+                segment_url = (
+                    line
+                    if re.match(r'^https?://', line)
+                    else compat_urlparse.urljoin(url, line))
+                segment_urls.append(segment_url)
+
+        is_test = self.params.get('test', False)
+        remaining_bytes = self._TEST_FILE_SIZE if is_test else None
+        byte_counter = 0
+        with open(tmpfilename, 'wb') as outf:
+            for i, segurl in enumerate(segment_urls):
+                self.to_screen(
+                    '[hlsnative] %s: Downloading segment %d / %d' %
+                    (info_dict['id'], i + 1, len(segment_urls)))
+                seg_req = compat_urllib_request.Request(segurl)
+                if remaining_bytes is not None:
+                    seg_req.add_header('Range', 'bytes=0-%d' % (remaining_bytes - 1))
+
+                segment = self.ydl.urlopen(seg_req).read()
+                if remaining_bytes is not None:
+                    segment = segment[:remaining_bytes]
+                    remaining_bytes -= len(segment)
+                outf.write(segment)
+                byte_counter += len(segment)
+                if remaining_bytes is not None and remaining_bytes <= 0:
+                    break
+
+        self._hook_progress({
+            'downloaded_bytes': byte_counter,
+            'total_bytes': byte_counter,
+            'filename': filename,
+            'status': 'finished',
+        })
+        self.try_rename(tmpfilename, filename)
+        return True
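
NOTE: NativeHlsFD treats every non-comment line of the manifest as a segment URL and resolves relative entries against the manifest's own URL. A self-contained sketch of just that parsing step, using the stdlib instead of the compat shims; the sample manifest and URLs are made up:

    import re
    from urllib.parse import urljoin

    manifest_url = 'http://example.com/path/stream.m3u8'
    manifest = '\n'.join([
        '#EXTM3U',
        '#EXTINF:10,',
        'seg0.ts',                          # relative -> resolved against manifest_url
        '#EXTINF:10,',
        'http://cdn.example.com/seg1.ts',   # absolute -> passed through
        '#EXT-X-ENDLIST',
    ])

    segment_urls = []
    for line in manifest.splitlines():
        line = line.strip()
        if line and not line.startswith('#'):  # tags/comments start with '#'
            segment_urls.append(
                line if re.match(r'^https?://', line)
                else urljoin(manifest_url, line))

    print(segment_urls)
    # ['http://example.com/path/seg0.ts', 'http://cdn.example.com/seg1.ts']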

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/http.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/http.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/http.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,234 @@
+from __future__ import unicode_literals
+
+import errno
+import os
+import socket
+import time
+
+from .common import FileDownloader
+from ..compat import (
+    compat_urllib_request,
+    compat_urllib_error,
+)
+from ..utils import (
+    ContentTooShortError,
+    encodeFilename,
+    sanitize_open,
+)
+
+
+class HttpFD(FileDownloader):
+    def real_download(self, filename, info_dict):
+        url = info_dict['url']
+        tmpfilename = self.temp_name(filename)
+        stream = None
+
+        # Do not include the Accept-Encoding header
+        headers = {'Youtubedl-no-compression': 'True'}
+        add_headers = info_dict.get('http_headers')
+        if add_headers:
+            headers.update(add_headers)
+        basic_request = compat_urllib_request.Request(url, None, headers)
+        request = compat_urllib_request.Request(url, None, headers)
+
+        is_test = self.params.get('test', False)
+
+        if is_test:
+            request.add_header('Range', 'bytes=0-%s' % str(self._TEST_FILE_SIZE - 1))
+
+        # Establish possible resume length
+        if os.path.isfile(encodeFilename(tmpfilename)):
+            resume_len = os.path.getsize(encodeFilename(tmpfilename))
+        else:
+            resume_len = 0
+
+        open_mode = 'wb'
+        if resume_len != 0:
+            if self.params.get('continuedl', True):
+                self.report_resuming_byte(resume_len)
+                request.add_header('Range', 'bytes=%d-' % resume_len)
+                open_mode = 'ab'
+            else:
+                resume_len = 0
+
+        count = 0
+        retries = self.params.get('retries', 0)
+        while count <= retries:
+            # Establish connection
+            try:
+                data = self.ydl.urlopen(request)
+                break
+            except (compat_urllib_error.HTTPError, ) as err:
+                if (err.code < 500 or err.code >= 600) and err.code != 416:
+                    # Unexpected HTTP error
+                    raise
+                elif err.code == 416:
+                    # Unable to resume (requested range not satisfiable)
+                    try:
+                        # Open the connection again without the range header
+                        data = self.ydl.urlopen(basic_request)
+                        content_length = data.info()['Content-Length']
+                    except (compat_urllib_error.HTTPError, ) as err:
+                        if err.code < 500 or err.code >= 600:
+                            raise
+                    else:
+                        # Examine the reported length
+                        if (content_length is not None and
+                                (resume_len - 100 < int(content_length) < resume_len + 100)):
+                            # The file had already been fully downloaded.
+                            # Explanation of the above condition: issue #175 revealed that YouTube
+                            # sometimes adds or removes a few bytes from the end of the file, changing
+                            # the file size slightly and causing problems for some users. As a suggested
+                            # fix, the file is considered completely downloaded if its size differs by
+                            # less than 100 bytes from the one on disk.
+                            self.report_file_already_downloaded(filename)
+                            self.try_rename(tmpfilename, filename)
+                            self._hook_progress({
+                                'filename': filename,
+                                'status': 'finished',
+                                'downloaded_bytes': resume_len,
+                                'total_bytes': resume_len,
+                            })
+                            return True
+                        else:
+                            # The length does not match, we start the download over
+                            self.report_unable_to_resume()
+                            resume_len = 0
+                            open_mode = 'wb'
+                            break
+            except socket.error as e:
+                if e.errno != errno.ECONNRESET:
+                    # Connection reset is no problem, just retry
+                    raise
+
+            # Retry
+            count += 1
+            if count <= retries:
+                self.report_retry(count, retries)
+
+        if count > retries:
+            self.report_error('giving up after %s retries' % retries)
+            return False
+
+        data_len = data.info().get('Content-length', None)
+
+        # Range HTTP header may be ignored/unsupported by a webserver
+        # (e.g. extractor/scivee.py, extractor/bambuser.py).
+        # However, for a test we still would like to download just a piece of a file.
+        # To achieve this we limit data_len to _TEST_FILE_SIZE and manually control
+        # block size when downloading a file.
+        if is_test and (data_len is None or int(data_len) > self._TEST_FILE_SIZE):
+            data_len = self._TEST_FILE_SIZE
+
+        if data_len is not None:
+            data_len = int(data_len) + resume_len
+            min_data_len = self.params.get("min_filesize", None)
+            max_data_len = self.params.get("max_filesize", None)
+            if min_data_len is not None and data_len < min_data_len:
+                self.to_screen('\r[download] File is smaller than min-filesize (%s bytes < %s bytes). Aborting.' % (data_len, min_data_len))
+                return False
+            if max_data_len is not None and data_len > max_data_len:
+                self.to_screen('\r[download] File is larger than max-filesize (%s bytes > %s bytes). Aborting.' % (data_len, max_data_len))
+                return False
+
+        byte_counter = 0 + resume_len
+        block_size = self.params.get('buffersize', 1024)
+        start = time.time()
+
+        # measure time over whole while-loop, so slow_down() and best_block_size() work together properly
+        now = None  # needed for slow_down() in the first loop run
+        before = start  # start measuring
+        while True:
+
+            # Download and write
+            data_block = data.read(block_size if not is_test else min(block_size, data_len - byte_counter))
+            byte_counter += len(data_block)
+
+            # exit loop when download is finished
+            if len(data_block) == 0:
+                break
+
+            # Open destination file just in time
+            if stream is None:
+                try:
+                    (stream, tmpfilename) = sanitize_open(tmpfilename, open_mode)
+                    assert stream is not None
+                    filename = self.undo_temp_name(tmpfilename)
+                    self.report_destination(filename)
+                except (OSError, IOError) as err:
+                    self.report_error('unable to open for writing: %s' % str(err))
+                    return False
+
+                if self.params.get('xattr_set_filesize', False) and data_len is not None:
+                    try:
+                        import xattr
+                        xattr.setxattr(tmpfilename, 'user.ytdl.filesize', str(data_len))
+                    except(OSError, IOError, ImportError) as err:
+                        self.report_error('unable to set filesize xattr: %s' % str(err))
+
+            try:
+                stream.write(data_block)
+            except (IOError, OSError) as err:
+                self.to_stderr('\n')
+                self.report_error('unable to write data: %s' % str(err))
+                return False
+
+            # Apply rate limit
+            self.slow_down(start, now, byte_counter - resume_len)
+
+            # end measuring of one loop run
+            now = time.time()
+            after = now
+
+            # Adjust block size
+            if not self.params.get('noresizebuffer', False):
+                block_size = self.best_block_size(after - before, len(data_block))
+
+            before = after
+
+            # Progress message
+            speed = self.calc_speed(start, now, byte_counter - resume_len)
+            if data_len is None:
+                eta = None
+            else:
+                eta = self.calc_eta(start, time.time(), data_len - resume_len, byte_counter - resume_len)
+
+            self._hook_progress({
+                'status': 'downloading',
+                'downloaded_bytes': byte_counter,
+                'total_bytes': data_len,
+                'tmpfilename': tmpfilename,
+                'filename': filename,
+                'eta': eta,
+                'speed': speed,
+                'elapsed': now - start,
+            })
+
+            if is_test and byte_counter == data_len:
+                break
+
+        if stream is None:
+            self.to_stderr('\n')
+            self.report_error('Did not get any data blocks')
+            return False
+        if tmpfilename != '-':
+            stream.close()
+
+        if data_len is not None and byte_counter != data_len:
+            raise ContentTooShortError(byte_counter, int(data_len))
+        self.try_rename(tmpfilename, filename)
+
+        # Update file modification time
+        if self.params.get('updatetime', True):
+            info_dict['filetime'] = self.try_utime(filename, data.info().get('last-modified', None))
+
+        self._hook_progress({
+            'downloaded_bytes': byte_counter,
+            'total_bytes': byte_counter,
+            'filename': filename,
+            'status': 'finished',
+            'elapsed': time.time() - start,
+        })
+
+        return True
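
NOTE: the resume logic in HttpFD above boils down to: if a partial file exists, request only the remaining bytes via a Range header and append; on a 416 ("requested range not satisfiable") retry without the header and treat the file as complete if the reported Content-Length is within 100 bytes of what is already on disk. A minimal stdlib-only sketch of the request setup (URL and filename are illustrative):

    import os
    import urllib.request

    url = 'http://example.com/video.mp4'
    tmpfilename = 'video.mp4.part'

    resume_len = (os.path.getsize(tmpfilename)
                  if os.path.isfile(tmpfilename) else 0)
    request = urllib.request.Request(url)
    open_mode = 'wb'
    if resume_len:
        # Ask for the remaining bytes only and append to the partial file.
        request.add_header('Range', 'bytes=%d-' % resume_len)
        open_mode = 'ab'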

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/rtmp.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/rtmp.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/rtmp.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,203 @@
+from __future__ import unicode_literals
+
+import os
+import re
+import subprocess
+import time
+
+from .common import FileDownloader
+from ..compat import compat_str
+from ..utils import (
+    check_executable,
+    encodeFilename,
+    encodeArgument,
+    get_exe_version,
+)
+
+
+def rtmpdump_version():
+    return get_exe_version(
+        'rtmpdump', ['--help'], r'(?i)RTMPDump\s*v?([0-9a-zA-Z._-]+)')
+
+
+class RtmpFD(FileDownloader):
+    def real_download(self, filename, info_dict):
+        def run_rtmpdump(args):
+            start = time.time()
+            resume_percent = None
+            resume_downloaded_data_len = None
+            proc = subprocess.Popen(args, stderr=subprocess.PIPE)
+            cursor_in_new_line = True
+            proc_stderr_closed = False
+            while not proc_stderr_closed:
+                # read line from stderr
+                line = ''
+                while True:
+                    char = proc.stderr.read(1)
+                    if not char:
+                        proc_stderr_closed = True
+                        break
+                    if char in [b'\r', b'\n']:
+                        break
+                    line += char.decode('ascii', 'replace')
+                if not line:
+                    # proc_stderr_closed is True
+                    continue
+                mobj = re.search(r'([0-9]+\.[0-9]{3}) kB / [0-9]+\.[0-9]{2} sec \(([0-9]{1,2}\.[0-9])%\)', line)
+                if mobj:
+                    downloaded_data_len = int(float(mobj.group(1)) * 1024)
+                    percent = float(mobj.group(2))
+                    if not resume_percent:
+                        resume_percent = percent
+                        resume_downloaded_data_len = downloaded_data_len
+                    time_now = time.time()
+                    eta = self.calc_eta(start, time_now, 100 - resume_percent, percent - resume_percent)
+                    speed = self.calc_speed(start, time_now, downloaded_data_len - resume_downloaded_data_len)
+                    data_len = None
+                    if percent > 0:
+                        data_len = int(downloaded_data_len * 100 / percent)
+                    self._hook_progress({
+                        'status': 'downloading',
+                        'downloaded_bytes': downloaded_data_len,
+                        'total_bytes_estimate': data_len,
+                        'tmpfilename': tmpfilename,
+                        'filename': filename,
+                        'eta': eta,
+                        'elapsed': time_now - start,
+                        'speed': speed,
+                    })
+                    cursor_in_new_line = False
+                else:
+                    # no percent for live streams
+                    mobj = re.search(r'([0-9]+\.[0-9]{3}) kB / [0-9]+\.[0-9]{2} sec', line)
+                    if mobj:
+                        downloaded_data_len = int(float(mobj.group(1)) * 1024)
+                        time_now = time.time()
+                        speed = self.calc_speed(start, time_now, downloaded_data_len)
+                        self._hook_progress({
+                            'downloaded_bytes': downloaded_data_len,
+                            'tmpfilename': tmpfilename,
+                            'filename': filename,
+                            'status': 'downloading',
+                            'elapsed': time_now - start,
+                            'speed': speed,
+                        })
+                        cursor_in_new_line = False
+                    elif self.params.get('verbose', False):
+                        if not cursor_in_new_line:
+                            self.to_screen('')
+                        cursor_in_new_line = True
+                        self.to_screen('[rtmpdump] ' + line)
+            proc.wait()
+            if not cursor_in_new_line:
+                self.to_screen('')
+            return proc.returncode
+
+        url = info_dict['url']
+        player_url = info_dict.get('player_url', None)
+        page_url = info_dict.get('page_url', None)
+        app = info_dict.get('app', None)
+        play_path = info_dict.get('play_path', None)
+        tc_url = info_dict.get('tc_url', None)
+        flash_version = info_dict.get('flash_version', None)
+        live = info_dict.get('rtmp_live', False)
+        conn = info_dict.get('rtmp_conn', None)
+        protocol = info_dict.get('rtmp_protocol', None)
+        real_time = info_dict.get('rtmp_real_time', False)
+        no_resume = info_dict.get('no_resume', False)
+        continue_dl = info_dict.get('continuedl', True)
+
+        self.report_destination(filename)
+        tmpfilename = self.temp_name(filename)
+        test = self.params.get('test', False)
+
+        # Check for rtmpdump first
+        if not check_executable('rtmpdump', ['-h']):
+            self.report_error('RTMP download detected but "rtmpdump" could not be run. Please install it.')
+            return False
+
+        # Download using rtmpdump. rtmpdump returns exit code 2 when
+        # the connection was interrupted and resuming appears to be
+        # possible. This is part of rtmpdump's normal usage, AFAIK.
+        basic_args = [
+            'rtmpdump', '--verbose', '-r', url,
+            '-o', tmpfilename]
+        if player_url is not None:
+            basic_args += ['--swfVfy', player_url]
+        if page_url is not None:
+            basic_args += ['--pageUrl', page_url]
+        if app is not None:
+            basic_args += ['--app', app]
+        if play_path is not None:
+            basic_args += ['--playpath', play_path]
+        if tc_url is not None:
+            basic_args += ['--tcUrl', tc_url]
+        if test:
+            basic_args += ['--stop', '1']
+        if flash_version is not None:
+            basic_args += ['--flashVer', flash_version]
+        if live:
+            basic_args += ['--live']
+        if isinstance(conn, list):
+            for entry in conn:
+                basic_args += ['--conn', entry]
+        elif isinstance(conn, compat_str):
+            basic_args += ['--conn', conn]
+        if protocol is not None:
+            basic_args += ['--protocol', protocol]
+        if real_time:
+            basic_args += ['--realtime']
+
+        args = basic_args
+        if not no_resume and continue_dl and not live:
+            args += ['--resume']
+        if not live and continue_dl:
+            args += ['--skip', '1']
+
+        args = [encodeArgument(a) for a in args]
+
+        self._debug_cmd(args, exe='rtmpdump')
+
+        RD_SUCCESS = 0
+        RD_FAILED = 1
+        RD_INCOMPLETE = 2
+        RD_NO_CONNECT = 3
+
+        retval = run_rtmpdump(args)
+
+        if retval == RD_NO_CONNECT:
+            self.report_error('[rtmpdump] Could not connect to RTMP server.')
+            return False
+
+        while (retval == RD_INCOMPLETE or retval == RD_FAILED) and not test and not live:
+            prevsize = os.path.getsize(encodeFilename(tmpfilename))
+            self.to_screen('[rtmpdump] %s bytes' % prevsize)
+            time.sleep(5.0)  # This seems to be needed
+            args = basic_args + ['--resume']
+            if retval == RD_FAILED:
+                args += ['--skip', '1']
+            args = [encodeArgument(a) for a in args]
+            retval = run_rtmpdump(args)
+            cursize = os.path.getsize(encodeFilename(tmpfilename))
+            if prevsize == cursize and retval == RD_FAILED:
+                break
+            # Some rtmp streams seem to abort after ~ 99.8%. Don't complain about those
+            if prevsize == cursize and retval == RD_INCOMPLETE and cursize > 1024:
+                self.to_screen('[rtmpdump] Could not download the whole video. This can happen for some advertisements.')
+                retval = RD_SUCCESS
+                break
+        if retval == RD_SUCCESS or (test and retval == RD_INCOMPLETE):
+            fsize = os.path.getsize(encodeFilename(tmpfilename))
+            self.to_screen('[rtmpdump] %s bytes' % fsize)
+            self.try_rename(tmpfilename, filename)
+            self._hook_progress({
+                'downloaded_bytes': fsize,
+                'total_bytes': fsize,
+                'filename': filename,
+                'status': 'finished',
+            })
+            return True
+        else:
+            self.to_stderr('\n')
+            self.report_error('rtmpdump exited with code %d' % retval)
+            return False
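
NOTE: run_rtmpdump above recovers progress by scraping rtmpdump's stderr with a regex, since rtmpdump reports kilobytes and (for non-live streams) a percentage rather than machine-readable output. A standalone sketch with a made-up but representative stderr line:

    import re

    line = '3096.368 kB / 24.94 sec (73.2%)'
    mobj = re.search(
        r'([0-9]+\.[0-9]{3}) kB / [0-9]+\.[0-9]{2} sec \(([0-9]{1,2}\.[0-9])%\)',
        line)
    if mobj:
        downloaded_bytes = int(float(mobj.group(1)) * 1024)  # kB -> bytes
        percent = float(mobj.group(2))
        # With a percentage, the total size can be back-calculated:
        total_bytes = int(downloaded_bytes * 100 / percent)
        print(downloaded_bytes, percent, total_bytes)  # 3170680 73.2 4331530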

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/rtsp.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/rtsp.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/downloader/rtsp.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,45 @@
+from __future__ import unicode_literals
+
+import os
+import subprocess
+
+from .common import FileDownloader
+from ..utils import (
+    check_executable,
+    encodeFilename,
+)
+
+
+class RtspFD(FileDownloader):
+    def real_download(self, filename, info_dict):
+        url = info_dict['url']
+        self.report_destination(filename)
+        tmpfilename = self.temp_name(filename)
+
+        if check_executable('mplayer', ['-h']):
+            args = [
+                'mplayer', '-really-quiet', '-vo', 'null', '-vc', 'dummy',
+                '-dumpstream', '-dumpfile', tmpfilename, url]
+        elif check_executable('mpv', ['-h']):
+            args = [
+                'mpv', '-really-quiet', '--vo=null', '--stream-dump=' + tmpfilename, url]
+        else:
+            self.report_error('MMS or RTSP download detected but neither "mplayer" nor "mpv" could be run. Please install one of them.')
+            return False
+
+        retval = subprocess.call(args)
+        if retval == 0:
+            fsize = os.path.getsize(encodeFilename(tmpfilename))
+            self.to_screen('\r[%s] %s bytes' % (args[0], fsize))
+            self.try_rename(tmpfilename, filename)
+            self._hook_progress({
+                'downloaded_bytes': fsize,
+                'total_bytes': fsize,
+                'filename': filename,
+                'status': 'finished',
+            })
+            return True
+        else:
+            self.to_stderr('\n')
+            self.report_error('%s exited with code %d' % (args[0], retval))
+            return False

=== added directory 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor'
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/__init__.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/__init__.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/__init__.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,818 @@
+from __future__ import unicode_literals
+
+from .abc import ABCIE
+from .abc7news import Abc7NewsIE
+from .academicearth import AcademicEarthCourseIE
+from .addanime import AddAnimeIE
+from .adobetv import (
+    AdobeTVIE,
+    AdobeTVVideoIE,
+)
+from .adultswim import AdultSwimIE
+from .aftenposten import AftenpostenIE
+from .aftonbladet import AftonbladetIE
+from .airmozilla import AirMozillaIE
+from .aljazeera import AlJazeeraIE
+from .alphaporno import AlphaPornoIE
+from .anitube import AnitubeIE
+from .anysex import AnySexIE
+from .aol import AolIE
+from .allocine import AllocineIE
+from .aparat import AparatIE
+from .appletrailers import AppleTrailersIE
+from .archiveorg import ArchiveOrgIE
+from .ard import ARDIE, ARDMediathekIE
+from .arte import (
+    ArteTvIE,
+    ArteTVPlus7IE,
+    ArteTVCreativeIE,
+    ArteTVConcertIE,
+    ArteTVFutureIE,
+    ArteTVDDCIE,
+    ArteTVEmbedIE,
+)
+from .atresplayer import AtresPlayerIE
+from .atttechchannel import ATTTechChannelIE
+from .audiomack import AudiomackIE, AudiomackAlbumIE
+from .azubu import AzubuIE
+from .baidu import BaiduVideoIE
+from .bambuser import BambuserIE, BambuserChannelIE
+from .bandcamp import BandcampIE, BandcampAlbumIE
+from .bbccouk import BBCCoUkIE
+from .beeg import BeegIE
+from .behindkink import BehindKinkIE
+from .beatportpro import BeatportProIE
+from .bet import BetIE
+from .bild import BildIE
+from .bilibili import BiliBiliIE
+from .blinkx import BlinkxIE
+from .bliptv import BlipTVIE, BlipTVUserIE
+from .bloomberg import BloombergIE
+from .bpb import BpbIE
+from .br import BRIE
+from .breakcom import BreakIE
+from .brightcove import BrightcoveIE
+from .buzzfeed import BuzzFeedIE
+from .byutv import BYUtvIE
+from .c56 import C56IE
+from .camdemy import (
+    CamdemyIE,
+    CamdemyFolderIE
+)
+from .canal13cl import Canal13clIE
+from .canalplus import CanalplusIE
+from .canalc2 import Canalc2IE
+from .cbs import CBSIE
+from .cbsnews import CBSNewsIE
+from .cbssports import CBSSportsIE
+from .ccc import CCCIE
+from .ceskatelevize import CeskaTelevizeIE
+from .channel9 import Channel9IE
+from .chilloutzone import ChilloutzoneIE
+from .chirbit import (
+    ChirbitIE,
+    ChirbitProfileIE,
+)
+from .cinchcast import CinchcastIE
+from .cinemassacre import CinemassacreIE
+from .clipfish import ClipfishIE
+from .cliphunter import CliphunterIE
+from .clipsyndicate import ClipsyndicateIE
+from .cloudy import CloudyIE
+from .clubic import ClubicIE
+from .cmt import CMTIE
+from .cnet import CNETIE
+from .cnn import (
+    CNNIE,
+    CNNBlogsIE,
+    CNNArticleIE,
+)
+from .collegehumor import CollegeHumorIE
+from .collegerama import CollegeRamaIE
+from .comedycentral import ComedyCentralIE, ComedyCentralShowsIE
+from .comcarcoff import ComCarCoffIE
+from .commonmistakes import CommonMistakesIE, UnicodeBOMIE
+from .condenast import CondeNastIE
+from .cracked import CrackedIE
+from .criterion import CriterionIE
+from .crooksandliars import CrooksAndLiarsIE
+from .crunchyroll import (
+    CrunchyrollIE,
+    CrunchyrollShowPlaylistIE
+)
+from .cspan import CSpanIE
+from .ctsnews import CtsNewsIE
+from .dailymotion import (
+    DailymotionIE,
+    DailymotionPlaylistIE,
+    DailymotionUserIE,
+    DailymotionCloudIE,
+)
+from .daum import DaumIE
+from .dbtv import DBTVIE
+from .dctp import DctpTvIE
+from .deezer import DeezerPlaylistIE
+from .dfb import DFBIE
+from .dhm import DHMIE
+from .dotsub import DotsubIE
+from .douyutv import DouyuTVIE
+from .dramafever import (
+    DramaFeverIE,
+    DramaFeverSeriesIE,
+)
+from .dreisat import DreiSatIE
+from .drbonanza import DRBonanzaIE
+from .drtuber import DrTuberIE
+from .drtv import DRTVIE
+from .dvtv import DVTVIE
+from .dump import DumpIE
+from .dumpert import DumpertIE
+from .defense import DefenseGouvFrIE
+from .discovery import DiscoveryIE
+from .divxstage import DivxStageIE
+from .dropbox import DropboxIE
+from .eagleplatform import EaglePlatformIE
+from .ebaumsworld import EbaumsWorldIE
+from .echomsk import EchoMskIE
+from .ehow import EHowIE
+from .eighttracks import EightTracksIE
+from .einthusan import EinthusanIE
+from .eitb import EitbIE
+from .ellentv import (
+    EllenTVIE,
+    EllenTVClipsIE,
+)
+from .elpais import ElPaisIE
+from .embedly import EmbedlyIE
+from .engadget import EngadgetIE
+from .eporner import EpornerIE
+from .eroprofile import EroProfileIE
+from .escapist import EscapistIE
+from .espn import ESPNIE
+from .everyonesmixtape import EveryonesMixtapeIE
+from .exfm import ExfmIE
+from .expotv import ExpoTVIE
+from .extremetube import ExtremeTubeIE
+from .facebook import FacebookIE
+from .faz import FazIE
+from .fc2 import FC2IE
+from .firstpost import FirstpostIE
+from .firsttv import FirstTVIE
+from .fivemin import FiveMinIE
+from .fivetv import FiveTVIE
+from .fktv import (
+    FKTVIE,
+    FKTVPosteckeIE,
+)
+from .flickr import FlickrIE
+from .folketinget import FolketingetIE
+from .footyroom import FootyRoomIE
+from .fourtube import FourTubeIE
+from .foxgay import FoxgayIE
+from .foxnews import FoxNewsIE
+from .foxsports import FoxSportsIE
+from .franceculture import FranceCultureIE
+from .franceinter import FranceInterIE
+from .francetv import (
+    PluzzIE,
+    FranceTvInfoIE,
+    FranceTVIE,
+    GenerationQuoiIE,
+    CultureboxIE,
+)
+from .freesound import FreesoundIE
+from .freespeech import FreespeechIE
+from .freevideo import FreeVideoIE
+from .funnyordie import FunnyOrDieIE
+from .gamekings import GamekingsIE
+from .gameone import (
+    GameOneIE,
+    GameOnePlaylistIE,
+)
+from .gamersyde import GamersydeIE
+from .gamespot import GameSpotIE
+from .gamestar import GameStarIE
+from .gametrailers import GametrailersIE
+from .gazeta import GazetaIE
+from .gdcvault import GDCVaultIE
+from .generic import GenericIE
+from .gfycat import GfycatIE
+from .giantbomb import GiantBombIE
+from .giga import GigaIE
+from .glide import GlideIE
+from .globo import GloboIE
+from .godtube import GodTubeIE
+from .goldenmoustache import GoldenMoustacheIE
+from .golem import GolemIE
+from .googleplus import GooglePlusIE
+from .googlesearch import GoogleSearchIE
+from .gorillavid import GorillaVidIE
+from .goshgay import GoshgayIE
+from .groupon import GrouponIE
+from .hark import HarkIE
+from .hearthisat import HearThisAtIE
+from .heise import HeiseIE
+from .hellporno import HellPornoIE
+from .helsinki import HelsinkiIE
+from .hentaistigma import HentaiStigmaIE
+from .historicfilms import HistoricFilmsIE
+from .history import HistoryIE
+from .hitbox import HitboxIE, HitboxLiveIE
+from .hornbunny import HornBunnyIE
+from .hostingbulk import HostingBulkIE
+from .hotnewhiphop import HotNewHipHopIE
+from .howcast import HowcastIE
+from .howstuffworks import HowStuffWorksIE
+from .huffpost import HuffPostIE
+from .hypem import HypemIE
+from .iconosquare import IconosquareIE
+from .ign import IGNIE, OneUPIE
+from .imdb import (
+    ImdbIE,
+    ImdbListIE
+)
+from .imgur import ImgurIE
+from .ina import InaIE
+from .infoq import InfoQIE
+from .instagram import InstagramIE, InstagramUserIE
+from .internetvideoarchive import InternetVideoArchiveIE
+from .iprima import IPrimaIE
+from .iqiyi import IqiyiIE
+from .ivi import (
+    IviIE,
+    IviCompilationIE
+)
+from .izlesene import IzleseneIE
+from .jadorecettepub import JadoreCettePubIE
+from .jeuxvideo import JeuxVideoIE
+from .jove import JoveIE
+from .jukebox import JukeboxIE
+from .jpopsukitv import JpopsukiIE
+from .kaltura import KalturaIE
+from .kanalplay import KanalPlayIE
+from .kankan import KankanIE
+from .karaoketv import KaraoketvIE
+from .karrierevideos import KarriereVideosIE
+from .keezmovies import KeezMoviesIE
+from .khanacademy import KhanAcademyIE
+from .kickstarter import KickStarterIE
+from .keek import KeekIE
+from .kontrtube import KontrTubeIE
+from .krasview import KrasViewIE
+from .ku6 import Ku6IE
+from .kuwo import (
+    KuwoIE,
+    KuwoAlbumIE,
+    KuwoChartIE,
+    KuwoSingerIE,
+    KuwoCategoryIE,
+    KuwoMvIE,
+)
+from .la7 import LA7IE
+from .laola1tv import Laola1TvIE
+from .letv import (
+    LetvIE,
+    LetvTvIE,
+    LetvPlaylistIE
+)
+from .libsyn import LibsynIE
+from .lifenews import (
+    LifeNewsIE,
+    LifeEmbedIE,
+)
+from .liveleak import LiveLeakIE
+from .livestream import (
+    LivestreamIE,
+    LivestreamOriginalIE,
+    LivestreamShortenerIE,
+)
+from .lnkgo import LnkGoIE
+from .lrt import LRTIE
+from .lynda import (
+    LyndaIE,
+    LyndaCourseIE
+)
+from .m6 import M6IE
+from .macgamestore import MacGameStoreIE
+from .mailru import MailRuIE
+from .malemotion import MalemotionIE
+from .mdr import MDRIE
+from .megavideoz import MegaVideozIE
+from .metacafe import MetacafeIE
+from .metacritic import MetacriticIE
+from .mgoon import MgoonIE
+from .minhateca import MinhatecaIE
+from .ministrygrid import MinistryGridIE
+from .miomio import MioMioIE
+from .mit import TechTVMITIE, MITIE, OCWMITIE
+from .mitele import MiTeleIE
+from .mixcloud import MixcloudIE
+from .mlb import MLBIE
+from .mpora import MporaIE
+from .moevideo import MoeVideoIE
+from .mofosex import MofosexIE
+from .mojvideo import MojvideoIE
+from .moniker import MonikerIE
+from .mooshare import MooshareIE
+from .morningstar import MorningstarIE
+from .motherless import MotherlessIE
+from .motorsport import MotorsportIE
+from .movieclips import MovieClipsIE
+from .moviezine import MoviezineIE
+from .movshare import MovShareIE
+from .mtv import (
+    MTVIE,
+    MTVServicesEmbeddedIE,
+    MTVIggyIE,
+)
+from .muenchentv import MuenchenTVIE
+from .musicplayon import MusicPlayOnIE
+from .musicvault import MusicVaultIE
+from .muzu import MuzuTVIE
+from .myspace import MySpaceIE, MySpaceAlbumIE
+from .myspass import MySpassIE
+from .myvi import MyviIE
+from .myvideo import MyVideoIE
+from .myvidster import MyVidsterIE
+from .nationalgeographic import NationalGeographicIE
+from .naver import NaverIE
+from .nba import NBAIE
+from .nbc import (
+    NBCIE,
+    NBCNewsIE,
+    NBCSportsIE,
+    NBCSportsVPlayerIE,
+)
+from .ndr import (
+    NDRIE,
+    NJoyIE,
+)
+from .ndtv import NDTVIE
+from .netzkino import NetzkinoIE
+from .nerdcubed import NerdCubedFeedIE
+from .nerdist import NerdistIE
+from .neteasemusic import (
+    NetEaseMusicIE,
+    NetEaseMusicAlbumIE,
+    NetEaseMusicSingerIE,
+    NetEaseMusicListIE,
+    NetEaseMusicMvIE,
+    NetEaseMusicProgramIE,
+    NetEaseMusicDjRadioIE,
+)
+from .newgrounds import NewgroundsIE
+from .newstube import NewstubeIE
+from .nextmedia import (
+    NextMediaIE,
+    NextMediaActionNewsIE,
+    AppleDailyIE,
+)
+from .nfb import NFBIE
+from .nfl import NFLIE
+from .nhl import (
+    NHLIE,
+    NHLNewsIE,
+    NHLVideocenterIE,
+)
+from .niconico import NiconicoIE, NiconicoPlaylistIE
+from .ninegag import NineGagIE
+from .noco import NocoIE
+from .normalboots import NormalbootsIE
+from .nosvideo import NosVideoIE
+from .nova import NovaIE
+from .novamov import NovaMovIE
+from .nowness import NownessIE
+from .nowtv import NowTVIE
+from .nowvideo import NowVideoIE
+from .npo import (
+    NPOIE,
+    NPOLiveIE,
+    NPORadioIE,
+    NPORadioFragmentIE,
+    VPROIE,
+    WNLIE
+)
+from .nrk import (
+    NRKIE,
+    NRKPlaylistIE,
+    NRKTVIE,
+)
+from .ntvde import NTVDeIE
+from .ntvru import NTVRuIE
+from .nytimes import (
+    NYTimesIE,
+    NYTimesArticleIE,
+)
+from .nuvid import NuvidIE
+from .odnoklassniki import OdnoklassnikiIE
+from .oktoberfesttv import OktoberfestTVIE
+from .onionstudios import OnionStudiosIE
+from .ooyala import (
+    OoyalaIE,
+    OoyalaExternalIE,
+)
+from .openfilm import OpenFilmIE
+from .orf import (
+    ORFTVthekIE,
+    ORFOE1IE,
+    ORFFM4IE,
+    ORFIPTVIE,
+)
+from .parliamentliveuk import ParliamentLiveUKIE
+from .patreon import PatreonIE
+from .pbs import PBSIE
+from .philharmoniedeparis import PhilharmonieDeParisIE
+from .phoenix import PhoenixIE
+from .photobucket import PhotobucketIE
+from .pinkbike import PinkbikeIE
+from .planetaplay import PlanetaPlayIE
+from .pladform import PladformIE
+from .played import PlayedIE
+from .playfm import PlayFMIE
+from .playvid import PlayvidIE
+from .playwire import PlaywireIE
+from .podomatic import PodomaticIE
+from .porn91 import Porn91IE
+from .pornhd import PornHdIE
+from .pornhub import (
+    PornHubIE,
+    PornHubPlaylistIE,
+)
+from .pornotube import PornotubeIE
+from .pornovoisines import PornoVoisinesIE
+from .pornoxo import PornoXOIE
+from .primesharetv import PrimeShareTVIE
+from .promptfile import PromptFileIE
+from .prosiebensat1 import ProSiebenSat1IE
+from .puls4 import Puls4IE
+from .pyvideo import PyvideoIE
+from .qqmusic import (
+    QQMusicIE,
+    QQMusicSingerIE,
+    QQMusicAlbumIE,
+    QQMusicToplistIE,
+    QQMusicPlaylistIE,
+)
+from .quickvid import QuickVidIE
+from .r7 import R7IE
+from .radiode import RadioDeIE
+from .radiojavan import RadioJavanIE
+from .radiobremen import RadioBremenIE
+from .radiofrance import RadioFranceIE
+from .rai import RaiIE
+from .rbmaradio import RBMARadioIE
+from .rds import RDSIE
+from .redtube import RedTubeIE
+from .restudy import RestudyIE
+from .reverbnation import ReverbNationIE
+from .ringtv import RingTVIE
+from .ro220 import Ro220IE
+from .rottentomatoes import RottenTomatoesIE
+from .roxwel import RoxwelIE
+from .rtbf import RTBFIE
+from .rte import RteIE
+from .rtlnl import RtlNlIE
+from .rtl2 import RTL2IE
+from .rtp import RTPIE
+from .rts import RTSIE
+from .rtve import RTVEALaCartaIE, RTVELiveIE, RTVEInfantilIE
+from .ruhd import RUHDIE
+from .rutube import (
+    RutubeIE,
+    RutubeChannelIE,
+    RutubeEmbedIE,
+    RutubeMovieIE,
+    RutubePersonIE,
+)
+from .rutv import RUTVIE
+from .ruutu import RuutuIE
+from .sandia import SandiaIE
+from .safari import (
+    SafariIE,
+    SafariCourseIE,
+)
+from .sapo import SapoIE
+from .savefrom import SaveFromIE
+from .sbs import SBSIE
+from .scivee import SciVeeIE
+from .screencast import ScreencastIE
+from .screencastomatic import ScreencastOMaticIE
+from .screenwavemedia import ScreenwaveMediaIE, TeamFourIE
+from .senateisvp import SenateISVPIE
+from .servingsys import ServingSysIE
+from .sexu import SexuIE
+from .sexykarma import SexyKarmaIE
+from .shared import SharedIE
+from .sharesix import ShareSixIE
+from .sina import SinaIE
+from .slideshare import SlideshareIE
+from .slutload import SlutloadIE
+from .smotri import (
+    SmotriIE,
+    SmotriCommunityIE,
+    SmotriUserIE,
+    SmotriBroadcastIE,
+)
+from .snagfilms import (
+    SnagFilmsIE,
+    SnagFilmsEmbedIE,
+)
+from .snotr import SnotrIE
+from .sohu import SohuIE
+from .soompi import (
+    SoompiIE,
+    SoompiShowIE,
+)
+from .soundcloud import (
+    SoundcloudIE,
+    SoundcloudSetIE,
+    SoundcloudUserIE,
+    SoundcloudPlaylistIE
+)
+from .soundgasm import (
+    SoundgasmIE,
+    SoundgasmProfileIE
+)
+from .southpark import (
+    SouthParkIE,
+    SouthParkDeIE,
+    SouthParkDkIE,
+    SouthParkEsIE,
+    SouthParkNlIE
+)
+from .space import SpaceIE
+from .spankbang import SpankBangIE
+from .spankwire import SpankwireIE
+from .spiegel import SpiegelIE, SpiegelArticleIE
+from .spiegeltv import SpiegeltvIE
+from .spike import SpikeIE
+from .sport5 import Sport5IE
+from .sportbox import (
+    SportBoxIE,
+    SportBoxEmbedIE,
+)
+from .sportdeutschland import SportDeutschlandIE
+from .srf import SrfIE
+from .srmediathek import SRMediathekIE
+from .ssa import SSAIE
+from .stanfordoc import StanfordOpenClassroomIE
+from .steam import SteamIE
+from .streamcloud import StreamcloudIE
+from .streamcz import StreamCZIE
+from .streetvoice import StreetVoiceIE
+from .sunporno import SunPornoIE
+from .svt import (
+    SVTIE,
+    SVTPlayIE,
+)
+from .swrmediathek import SWRMediathekIE
+from .syfy import SyfyIE
+from .sztvhu import SztvHuIE
+from .tagesschau import TagesschauIE
+from .tapely import TapelyIE
+from .tass import TassIE
+from .teachertube import (
+    TeacherTubeIE,
+    TeacherTubeUserIE,
+)
+from .teachingchannel import TeachingChannelIE
+from .teamcoco import TeamcocoIE
+from .techtalks import TechTalksIE
+from .ted import TEDIE
+from .telebruxelles import TeleBruxellesIE
+from .telecinco import TelecincoIE
+from .telemb import TeleMBIE
+from .teletask import TeleTaskIE
+from .tenplay import TenPlayIE
+from .testurl import TestURLIE
+from .testtube import TestTubeIE
+from .tf1 import TF1IE
+from .theonion import TheOnionIE
+from .theplatform import ThePlatformIE
+from .thesixtyone import TheSixtyOneIE
+from .thisamericanlife import ThisAmericanLifeIE
+from .thisav import ThisAVIE
+from .tinypic import TinyPicIE
+from .tlc import TlcIE, TlcDeIE
+from .tmz import (
+    TMZIE,
+    TMZArticleIE,
+)
+from .tnaflix import (
+    TNAFlixIE,
+    EMPFlixIE,
+    MovieFapIE,
+)
+from .thvideo import (
+    THVideoIE,
+    THVideoPlaylistIE
+)
+from .toutv import TouTvIE
+from .toypics import ToypicsUserIE, ToypicsIE
+from .traileraddict import TrailerAddictIE
+from .trilulilu import TriluliluIE
+from .trutube import TruTubeIE
+from .tube8 import Tube8IE
+from .tubitv import TubiTvIE
+from .tudou import TudouIE
+from .tumblr import TumblrIE
+from .tunein import TuneInIE
+from .turbo import TurboIE
+from .tutv import TutvIE
+from .tv2 import (
+    TV2IE,
+    TV2ArticleIE,
+)
+from .tv4 import TV4IE
+from .tvc import (
+    TVCIE,
+    TVCArticleIE,
+)
+from .tvigle import TvigleIE
+from .tvp import TvpIE, TvpSeriesIE
+from .tvplay import TVPlayIE
+from .tweakers import TweakersIE
+from .twentyfourvideo import TwentyFourVideoIE
+from .twentytwotracks import (
+    TwentyTwoTracksIE,
+    TwentyTwoTracksGenreIE
+)
+from .twitch import (
+    TwitchVideoIE,
+    TwitchChapterIE,
+    TwitchVodIE,
+    TwitchProfileIE,
+    TwitchPastBroadcastsIE,
+    TwitchBookmarksIE,
+    TwitchStreamIE,
+)
+from .twitter import TwitterCardIE
+from .ubu import UbuIE
+from .udemy import (
+    UdemyIE,
+    UdemyCourseIE
+)
+from .udn import UDNEmbedIE
+from .ultimedia import UltimediaIE
+from .unistra import UnistraIE
+from .urort import UrortIE
+from .ustream import UstreamIE, UstreamChannelIE
+from .varzesh3 import Varzesh3IE
+from .vbox7 import Vbox7IE
+from .veehd import VeeHDIE
+from .veoh import VeohIE
+from .vessel import VesselIE
+from .vesti import VestiIE
+from .vevo import VevoIE
+from .vgtv import (
+    BTArticleIE,
+    BTVestlendingenIE,
+    VGTVIE,
+)
+from .vh1 import VH1IE
+from .vice import ViceIE
+from .viddler import ViddlerIE
+from .videobam import VideoBamIE
+from .videodetective import VideoDetectiveIE
+from .videolecturesnet import VideoLecturesNetIE
+from .videofyme import VideofyMeIE
+from .videomega import VideoMegaIE
+from .videopremium import VideoPremiumIE
+from .videott import VideoTtIE
+from .videoweed import VideoWeedIE
+from .vidme import VidmeIE
+from .vidzi import VidziIE
+from .vier import VierIE, VierVideosIE
+from .viewster import ViewsterIE
+from .vimeo import (
+    VimeoIE,
+    VimeoAlbumIE,
+    VimeoChannelIE,
+    VimeoGroupsIE,
+    VimeoLikesIE,
+    VimeoReviewIE,
+    VimeoUserIE,
+    VimeoWatchLaterIE,
+)
+from .vimple import VimpleIE
+from .vine import (
+    VineIE,
+    VineUserIE,
+)
+from .viki import (
+    VikiIE,
+    VikiChannelIE,
+)
+from .vk import (
+    VKIE,
+    VKUserVideosIE,
+)
+from .vodlocker import VodlockerIE
+from .voicerepublic import VoiceRepublicIE
+from .vporn import VpornIE
+from .vrt import VRTIE
+from .vube import VubeIE
+from .vuclip import VuClipIE
+from .vulture import VultureIE
+from .walla import WallaIE
+from .washingtonpost import WashingtonPostIE
+from .wat import WatIE
+from .wayofthemaster import WayOfTheMasterIE
+from .wdr import (
+    WDRIE,
+    WDRMobileIE,
+    WDRMausIE,
+)
+from .webofstories import (
+    WebOfStoriesIE,
+    WebOfStoriesPlaylistIE,
+)
+from .weibo import WeiboIE
+from .wimp import WimpIE
+from .wistia import WistiaIE
+from .worldstarhiphop import WorldStarHipHopIE
+from .wrzuta import WrzutaIE
+from .wsj import WSJIE
+from .xbef import XBefIE
+from .xboxclips import XboxClipsIE
+from .xhamster import (
+    XHamsterIE,
+    XHamsterEmbedIE,
+)
+from .xminus import XMinusIE
+from .xnxx import XNXXIE
+from .xstream import XstreamIE
+from .xtube import XTubeUserIE, XTubeIE
+from .xuite import XuiteIE
+from .xvideos import XVideosIE
+from .xxxymovies import XXXYMoviesIE
+from .yahoo import (
+    YahooIE,
+    YahooSearchIE,
+)
+from .yam import YamIE
+from .yandexmusic import (
+    YandexMusicTrackIE,
+    YandexMusicAlbumIE,
+    YandexMusicPlaylistIE,
+)
+from .yesjapan import YesJapanIE
+from .yinyuetai import YinYueTaiIE
+from .ynet import YnetIE
+from .youjizz import YouJizzIE
+from .youku import YoukuIE
+from .youporn import YouPornIE
+from .yourupload import YourUploadIE
+from .youtube import (
+    YoutubeIE,
+    YoutubeChannelIE,
+    YoutubeFavouritesIE,
+    YoutubeHistoryIE,
+    YoutubePlaylistIE,
+    YoutubeRecommendedIE,
+    YoutubeSearchDateIE,
+    YoutubeSearchIE,
+    YoutubeSearchURLIE,
+    YoutubeShowIE,
+    YoutubeSubscriptionsIE,
+    YoutubeTruncatedIDIE,
+    YoutubeTruncatedURLIE,
+    YoutubeUserIE,
+    YoutubeWatchLaterIE,
+)
+from .zapiks import ZapiksIE
+from .zdf import ZDFIE, ZDFChannelIE
+from .zingmp3 import (
+    ZingMp3SongIE,
+    ZingMp3AlbumIE,
+)
+
+_ALL_CLASSES = [
+    klass
+    for name, klass in globals().items()
+    if name.endswith('IE') and name != 'GenericIE'
+]
+_ALL_CLASSES.append(GenericIE)
+
+
+def gen_extractors():
+    """ Return a list containing an instance of every supported extractor.
+    The order does matter; the first extractor matched is the one handling the URL.
+    """
+    return [klass() for klass in _ALL_CLASSES]
+
+
+def list_extractors(age_limit):
+    """
+    Return a list of extractors that are suitable for the given age,
+    sorted by extractor ID.
+    """
+
+    return sorted(
+        filter(lambda ie: ie.is_suitable(age_limit), gen_extractors()),
+        key=lambda ie: ie.IE_NAME.lower())
+
+
+def get_info_extractor(ie_name):
+    """Returns the info extractor class with the given ie_name"""
+    return globals()[ie_name + 'IE']
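
NOTE: the docstring above states the key invariant of this registry: URL dispatch walks the extractors in order and the first one whose suitable() accepts the URL wins, which is why GenericIE is appended last as the catch-all. A sketch of that dispatch, runnable from a tree where youtube_dl is importable:

    from youtube_dl.extractor import gen_extractors

    def pick_extractor(url):
        for ie in gen_extractors():
            if ie.suitable(url):  # matches the extractor's _VALID_URL pattern
                return ie
        return None

    ie = pick_extractor('https://www.youtube.com/watch?v=dQw4w9WgXcQ')
    print(ie.IE_NAME)  # 'youtube'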

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/abc.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/abc.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/abc.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,47 @@
+from __future__ import unicode_literals
+
+import re
+import json
+
+from .common import InfoExtractor
+
+
+class ABCIE(InfoExtractor):
+    IE_NAME = 'abc.net.au'
+    _VALID_URL = r'http://www\.abc\.net\.au/news/[^/]+/[^/]+/(?P<id>\d+)'
+
+    _TEST = {
+        'url': 'http://www.abc.net.au/news/2014-11-05/australia-to-staff-ebola-treatment-centre-in-sierra-leone/5868334',
+        'md5': 'cb3dd03b18455a661071ee1e28344d9f',
+        'info_dict': {
+            'id': '5868334',
+            'ext': 'mp4',
+            'title': 'Australia to help staff Ebola treatment centre in Sierra Leone',
+            'description': 'md5:809ad29c67a05f54eb41f2a105693a67',
+        },
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        urls_info_json = self._search_regex(
+            r'inlineVideoData\.push\((.*?)\);', webpage, 'video urls',
+            flags=re.DOTALL)
+        urls_info = json.loads(urls_info_json.replace('\'', '"'))
+        formats = [{
+            'url': url_info['url'],
+            'width': int(url_info['width']),
+            'height': int(url_info['height']),
+            'tbr': int(url_info['bitrate']),
+            'filesize': int(url_info['filesize']),
+        } for url_info in urls_info]
+        self._sort_formats(formats)
+
+        return {
+            'id': video_id,
+            'title': self._og_search_title(webpage),
+            'formats': formats,
+            'description': self._og_search_description(webpage),
+            'thumbnail': self._og_search_thumbnail(webpage),
+        }
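
NOTE: ABCIE above shows the contract every extractor file in this diff follows: subclass InfoExtractor, declare a _VALID_URL with an id group, and return an info dict with at least 'id' and 'title' plus either a direct 'url' or a 'formats' list. A hypothetical minimal extractor for illustration (the site, pattern, and markup are invented):

    from youtube_dl.extractor.common import InfoExtractor

    class ExampleIE(InfoExtractor):
        _VALID_URL = r'https?://media\.example\.com/video/(?P<id>\d+)'

        def _real_extract(self, url):
            video_id = self._match_id(url)  # the 'id' group of _VALID_URL
            webpage = self._download_webpage(url, video_id)
            return {
                'id': video_id,
                'title': self._og_search_title(webpage),
                'url': self._html_search_regex(
                    r'<source src="([^"]+)"', webpage, 'video url'),
            }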

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/abc7news.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/abc7news.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/abc7news.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,68 @@
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..utils import parse_iso8601
+
+
+class Abc7NewsIE(InfoExtractor):
+    _VALID_URL = r'https?://abc7news\.com(?:/[^/]+/(?P<display_id>[^/]+))?/(?P<id>\d+)'
+    _TESTS = [
+        {
+            'url': 'http://abc7news.com/entertainment/east-bay-museum-celebrates-vintage-synthesizers/472581/',
+            'info_dict': {
+                'id': '472581',
+                'display_id': 'east-bay-museum-celebrates-vintage-synthesizers',
+                'ext': 'mp4',
+                'title': 'East Bay museum celebrates history of synthesized music',
+                'description': 'md5:a4f10fb2f2a02565c1749d4adbab4b10',
+                'thumbnail': 're:^https?://.*\.jpg$',
+                'timestamp': 1421123075,
+                'upload_date': '20150113',
+                'uploader': 'Jonathan Bloom',
+            },
+            'params': {
+                # m3u8 download
+                'skip_download': True,
+            },
+        },
+        {
+            'url': 'http://abc7news.com/472581',
+            'only_matching': True,
+        },
+    ]
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        video_id = mobj.group('id')
+        display_id = mobj.group('display_id') or video_id
+
+        webpage = self._download_webpage(url, display_id)
+
+        m3u8 = self._html_search_meta(
+            'contentURL', webpage, 'm3u8 url', fatal=True)
+
+        formats = self._extract_m3u8_formats(m3u8, display_id, 'mp4')
+        self._sort_formats(formats)
+
+        title = self._og_search_title(webpage).strip()
+        description = self._og_search_description(webpage).strip()
+        thumbnail = self._og_search_thumbnail(webpage)
+        timestamp = parse_iso8601(self._search_regex(
+            r'<div class="meta">\s*<time class="timeago" datetime="([^"]+)">',
+            webpage, 'upload date', fatal=False))
+        uploader = self._search_regex(
+            r'rel="author">([^<]+)</a>',
+            webpage, 'uploader', default=None)
+
+        return {
+            'id': video_id,
+            'display_id': display_id,
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'timestamp': timestamp,
+            'uploader': uploader,
+            'formats': formats,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/academicearth.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/academicearth.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/academicearth.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,41 @@
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+
+
+class AcademicEarthCourseIE(InfoExtractor):
+    _VALID_URL = r'^https?://(?:www\.)?academicearth\.org/playlists/(?P<id>[^?#/]+)'
+    IE_NAME = 'AcademicEarth:Course'
+    _TEST = {
+        'url': 'http://academicearth.org/playlists/laws-of-nature/',
+        'info_dict': {
+            'id': 'laws-of-nature',
+            'title': 'Laws of Nature',
+            'description': 'Introduce yourself to the laws of nature with these free online college lectures from Yale, Harvard, and MIT.',
+        },
+        'playlist_count': 4,
+    }
+
+    def _real_extract(self, url):
+        playlist_id = self._match_id(url)
+
+        webpage = self._download_webpage(url, playlist_id)
+        title = self._html_search_regex(
+            r'<h1 class="playlist-name"[^>]*?>(.*?)</h1>', webpage, 'title')
+        description = self._html_search_regex(
+            r'<p class="excerpt"[^>]*?>(.*?)</p>',
+            webpage, 'description', fatal=False)
+        urls = re.findall(
+            r'<li class="lecture-preview">\s*?<a target="_blank" href="([^"]+)">',
+            webpage)
+        entries = [self.url_result(u) for u in urls]
+
+        return {
+            '_type': 'playlist',
+            'id': playlist_id,
+            'title': title,
+            'description': description,
+            'entries': entries,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/addanime.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/addanime.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/addanime.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,97 @@
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..compat import (
+    compat_HTTPError,
+    compat_str,
+    compat_urllib_parse,
+    compat_urllib_parse_urlparse,
+)
+from ..utils import (
+    ExtractorError,
+    qualities,
+)
+
+
+class AddAnimeIE(InfoExtractor):
+    _VALID_URL = r'http://(?:\w+\.)?add-anime\.net/(?:watch_video\.php\?(?:.*?)v=|video/)(?P<id>[\w_]+)'
+    _TESTS = [{
+        'url': 'http://www.add-anime.net/watch_video.php?v=24MR3YO5SAS9',
+        'md5': '72954ea10bc979ab5e2eb288b21425a0',
+        'info_dict': {
+            'id': '24MR3YO5SAS9',
+            'ext': 'mp4',
+            'description': 'One Piece 606',
+            'title': 'One Piece 606',
+        }
+    }, {
+        'url': 'http://add-anime.net/video/MDUGWYKNGBD8/One-Piece-687',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
+        try:
+            webpage = self._download_webpage(url, video_id)
+        except ExtractorError as ee:
+            if not isinstance(ee.cause, compat_HTTPError) or \
+               ee.cause.code != 503:
+                raise
+
+            redir_webpage = ee.cause.read().decode('utf-8')
+            action = self._search_regex(
+                r'<form id="challenge-form" action="([^"]+)"',
+                redir_webpage, 'Redirect form')
+            vc = self._search_regex(
+                r'<input type="hidden" name="jschl_vc" value="([^"]+)"/>',
+                redir_webpage, 'redirect vc value')
+            av = re.search(
+                r'a\.value = ([0-9]+)[+]([0-9]+)[*]([0-9]+);',
+                redir_webpage)
+            if av is None:
+                raise ExtractorError('Cannot find redirect math task')
+            av_res = int(av.group(1)) + int(av.group(2)) * int(av.group(3))
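+            # Worked example with hypothetical values: for "a.value = 10+3*16;"
+            # the expression above gives 10 + 3 * 16 = 58; the host name length
+            # is then added below, mirroring the JavaScript challenge on the page.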
+
+            parsed_url = compat_urllib_parse_urlparse(url)
+            av_val = av_res + len(parsed_url.netloc)
+            confirm_url = (
+                parsed_url.scheme + '://' + parsed_url.netloc +
+                action + '?' +
+                compat_urllib_parse.urlencode({
+                    'jschl_vc': vc, 'jschl_answer': compat_str(av_val)}))
+            self._download_webpage(
+                confirm_url, video_id,
+                note='Confirming after redirect')
+            webpage = self._download_webpage(url, video_id)
+
+        FORMATS = ('normal', 'hq')
+        quality = qualities(FORMATS)
+        formats = []
+        for format_id in FORMATS:
+            rex = r"var %s_video_file = '(.*?)';" % re.escape(format_id)
+            video_url = self._search_regex(rex, webpage, 'video file URL',
+                                           fatal=False)
+            if not video_url:
+                continue
+            formats.append({
+                'format_id': format_id,
+                'url': video_url,
+                'quality': quality(format_id),
+            })
+        self._sort_formats(formats)
+        video_title = self._og_search_title(webpage)
+        video_description = self._og_search_description(webpage)
+
+        return {
+            '_type': 'video',
+            'id': video_id,
+            'formats': formats,
+            'title': video_title,
+            'description': video_description
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/adobetv.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/adobetv.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/adobetv.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,133 @@
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import (
+    parse_duration,
+    unified_strdate,
+    str_to_int,
+    float_or_none,
+    ISO639Utils,
+)
+
+
+class AdobeTVIE(InfoExtractor):
+    _VALID_URL = r'https?://tv\.adobe\.com/watch/[^/]+/(?P<id>[^/]+)'
+
+    _TEST = {
+        'url': 'http://tv.adobe.com/watch/the-complete-picture-with-julieanne-kost/quick-tip-how-to-draw-a-circle-around-an-object-in-photoshop/',
+        'md5': '9bc5727bcdd55251f35ad311ca74fa1e',
+        'info_dict': {
+            'id': 'quick-tip-how-to-draw-a-circle-around-an-object-in-photoshop',
+            'ext': 'mp4',
+            'title': 'Quick Tip - How to Draw a Circle Around an Object in Photoshop',
+            'description': 'md5:99ec318dc909d7ba2a1f2b038f7d2311',
+            'thumbnail': 're:https?://.*\.jpg$',
+            'upload_date': '20110914',
+            'duration': 60,
+            'view_count': int,
+        },
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        player = self._parse_json(
+            self._search_regex(r'html5player:\s*({.+?})\s*\n', webpage, 'player'),
+            video_id)
+
+        title = player.get('title') or self._search_regex(
+            r'data-title="([^"]+)"', webpage, 'title')
+        description = self._og_search_description(webpage)
+        thumbnail = self._og_search_thumbnail(webpage)
+
+        upload_date = unified_strdate(
+            self._html_search_meta('datepublished', webpage, 'upload date'))
+
+        duration = parse_duration(
+            self._html_search_meta('duration', webpage, 'duration') or
+            self._search_regex(
+                r'Runtime:\s*(\d{2}:\d{2}:\d{2})',
+                webpage, 'duration', fatal=False))
+
+        view_count = str_to_int(self._search_regex(
+            r'<div class="views">\s*Views?:\s*([\d,.]+)\s*</div>',
+            webpage, 'view count'))
+
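+        # Each player source carries a src URL and, optionally, a quality and
+        # bitrate; fall back to the filename suffix (e.g. "...-1080.mp4") for the id.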
+        formats = [{
+            'url': source['src'],
+            'format_id': source.get('quality') or source['src'].split('-')[-1].split('.')[0] or None,
+            'tbr': source.get('bitrate'),
+        } for source in player['sources']]
+        self._sort_formats(formats)
+
+        return {
+            'id': video_id,
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'upload_date': upload_date,
+            'duration': duration,
+            'view_count': view_count,
+            'formats': formats,
+        }
+
+
+class AdobeTVVideoIE(InfoExtractor):
+    _VALID_URL = r'https?://video\.tv\.adobe\.com/v/(?P<id>\d+)'
+
+    _TEST = {
+        # From https://helpx.adobe.com/acrobat/how-to/new-experience-acrobat-dc.html?set=acrobat--get-started--essential-beginners
+        'url': 'https://video.tv.adobe.com/v/2456/',
+        'md5': '43662b577c018ad707a63766462b1e87',
+        'info_dict': {
+            'id': '2456',
+            'ext': 'mp4',
+            'title': 'New experience with Acrobat DC',
+            'description': 'New experience with Acrobat DC',
+            'duration': 248.667,
+        },
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
+        webpage = self._download_webpage(url, video_id)
+
+        player_params = self._parse_json(self._search_regex(
+            r'var\s+bridge\s*=\s*([^;]+);', webpage, 'player parameters'),
+            video_id)
+
+        formats = [{
+            'url': source['src'],
+            'width': source.get('width'),
+            'height': source.get('height'),
+            'tbr': source.get('bitrate'),
+        } for source in player_params['sources']]
+
+        # The duration varies between formats, both in the metadata and in
+        # the downloaded files, so just pick the maximum.
+        duration = max(filter(None, [
+            float_or_none(source.get('duration'), scale=1000)
+            for source in player_params['sources']]))
+
+        subtitles = {}
+        for translation in player_params.get('translations', []):
+            lang_id = translation.get('language_w3c') or ISO639Utils.long2short(translation['language_medium'])
+            if lang_id not in subtitles:
+                subtitles[lang_id] = []
+            subtitles[lang_id].append({
+                'url': translation['vttPath'],
+                'ext': 'vtt',
+            })
+
+        return {
+            'id': video_id,
+            'formats': formats,
+            'title': player_params['title'],
+            'description': self._og_search_description(webpage),
+            'duration': duration,
+            'subtitles': subtitles,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/adultswim.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/adultswim.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/adultswim.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,192 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..utils import (
+    ExtractorError,
+    float_or_none,
+    xpath_text,
+)
+
+
+class AdultSwimIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?adultswim\.com/videos/(?P<is_playlist>playlists/)?(?P<show_path>[^/]+)/(?P<episode_path>[^/?#]+)/?'
+
+    _TESTS = [{
+        'url': 'http://adultswim.com/videos/rick-and-morty/pilot',
+        'playlist': [
+            {
+                'md5': '247572debc75c7652f253c8daa51a14d',
+                'info_dict': {
+                    'id': 'rQxZvXQ4ROaSOqq-or2Mow-0',
+                    'ext': 'flv',
+                    'title': 'Rick and Morty - Pilot Part 1',
+                    'description': "Rick moves in with his daughter's family and establishes himself as a bad influence on his grandson, Morty. "
+                },
+            },
+            {
+                'md5': '77b0e037a4b20ec6b98671c4c379f48d',
+                'info_dict': {
+                    'id': 'rQxZvXQ4ROaSOqq-or2Mow-3',
+                    'ext': 'flv',
+                    'title': 'Rick and Morty - Pilot Part 4',
+                    'description': "Rick moves in with his daughter's family and establishes himself as a bad influence on his grandson, Morty. "
+                },
+            },
+        ],
+        'info_dict': {
+            'id': 'rQxZvXQ4ROaSOqq-or2Mow',
+            'title': 'Rick and Morty - Pilot',
+            'description': "Rick moves in with his daughter's family and establishes himself as a bad influence on his grandson, Morty. "
+        }
+    }, {
+        'url': 'http://www.adultswim.com/videos/playlists/american-parenting/putting-francine-out-of-business/',
+        'playlist': [
+            {
+                'md5': '2eb5c06d0f9a1539da3718d897f13ec5',
+                'info_dict': {
+                    'id': '-t8CamQlQ2aYZ49ItZCFog-0',
+                    'ext': 'flv',
+                    'title': 'American Dad - Putting Francine Out of Business',
+                    'description': 'Stan hatches a plan to get Francine out of the real estate business.Watch more American Dad on [adult swim].'
+                },
+            }
+        ],
+        'info_dict': {
+            'id': '-t8CamQlQ2aYZ49ItZCFog',
+            'title': 'American Dad - Putting Francine Out of Business',
+            'description': 'Stan hatches a plan to get Francine out of the real estate business.Watch more American Dad on [adult swim].'
+        },
+    }, {
+        'url': 'http://www.adultswim.com/videos/tim-and-eric-awesome-show-great-job/dr-steve-brule-for-your-wine/',
+        'playlist': [
+            {
+                'md5': '3e346a2ab0087d687a05e1e7f3b3e529',
+                'info_dict': {
+                    'id': 'sY3cMUR_TbuE4YmdjzbIcQ-0',
+                    'ext': 'flv',
+                    'title': 'Tim and Eric Awesome Show Great Job! - Dr. Steve Brule, For Your Wine',
+                    'description': 'Dr. Brule reports live from Wine Country with a special report on wines.  \r\nWatch Tim and Eric Awesome Show Great Job! episode #20, "Embarrassed" on Adult Swim.\r\n\r\n',
+                },
+            }
+        ],
+        'info_dict': {
+            'id': 'sY3cMUR_TbuE4YmdjzbIcQ',
+            'title': 'Tim and Eric Awesome Show Great Job! - Dr. Steve Brule, For Your Wine',
+            'description': 'Dr. Brule reports live from Wine Country with a special report on wines.  \r\nWatch Tim and Eric Awesome Show Great Job! episode #20, "Embarrassed" on Adult Swim.\r\n\r\n',
+        },
+    }]
+
+    @staticmethod
+    def find_video_info(collection, slug):
+        for video in collection.get('videos'):
+            if video.get('slug') == slug:
+                return video
+
+    @staticmethod
+    def find_collection_by_linkURL(collections, linkURL):
+        for collection in collections:
+            if collection.get('linkURL') == linkURL:
+                return collection
+
+    @staticmethod
+    def find_collection_containing_video(collections, slug):
+        for collection in collections:
+            for video in collection.get('videos'):
+                if video.get('slug') == slug:
+                    return collection, video
+        return None, None
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        show_path = mobj.group('show_path')
+        episode_path = mobj.group('episode_path')
+        is_playlist = bool(mobj.group('is_playlist'))
+
+        webpage = self._download_webpage(url, episode_path)
+
+        # Extract the value of `bootstrappedData` from the JavaScript in the page.
+        bootstrapped_data = self._parse_json(self._search_regex(
+            r'var bootstrappedData = ({.*});', webpage, 'bootstrapped data'), episode_path)
+
+        # Downloading videos from a /videos/playlist/ URL needs to be handled differently.
+        # NOTE: We are only downloading one video (the current one) not the playlist
+        if is_playlist:
+            collections = bootstrapped_data['playlists']['collections']
+            collection = self.find_collection_by_linkURL(collections, show_path)
+            video_info = self.find_video_info(collection, episode_path)
+
+            show_title = video_info['showTitle']
+            segment_ids = [video_info['videoPlaybackID']]
+        else:
+            collections = bootstrapped_data['show']['collections']
+            collection, video_info = self.find_collection_containing_video(collections, episode_path)
+
+            # Video wasn't found in the collections, let's try `slugged_video`.
+            if video_info is None:
+                if bootstrapped_data.get('slugged_video', {}).get('slug') == episode_path:
+                    video_info = bootstrapped_data['slugged_video']
+                else:
+                    raise ExtractorError('Unable to find video info')
+
+            show = bootstrapped_data['show']
+            show_title = show['title']
+            segment_ids = [clip['videoPlaybackID'] for clip in video_info['clips']]
+
+        episode_id = video_info['id']
+        episode_title = video_info['title']
+        episode_description = video_info['description']
+        episode_duration = video_info.get('duration')
+
+        entries = []
+        for part_num, segment_id in enumerate(segment_ids):
+            segment_url = 'http://www.adultswim.com/videos/api/v0/assets?id=%s&platform=mobile' % segment_id
+
+            segment_title = '%s - %s' % (show_title, episode_title)
+            if len(segment_ids) > 1:
+                segment_title += ' Part %d' % (part_num + 1)
+
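+            # Each segment is described by a small XML document on the mobile assets API.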
+            idoc = self._download_xml(
+                segment_url, segment_title,
+                'Downloading segment information', 'Unable to download segment information')
+
+            segment_duration = float_or_none(
+                xpath_text(idoc, './/trt', 'segment duration').strip())
+
+            formats = []
+            file_els = idoc.findall('.//files/file')
+
+            for file_el in file_els:
+                bitrate = file_el.attrib.get('bitrate')
+                ftype = file_el.attrib.get('type')
+
+                formats.append({
+                    'format_id': '%s_%s' % (bitrate, ftype),
+                    'url': file_el.text.strip(),
+                    # The bitrate may not be a number (for example: 'iphone')
+                    'tbr': int(bitrate) if bitrate.isdigit() else None,
+                    'quality': 1 if ftype == 'hd' else -1
+                })
+
+            self._sort_formats(formats)
+
+            entries.append({
+                'id': segment_id,
+                'title': segment_title,
+                'formats': formats,
+                'duration': segment_duration,
+                'description': episode_description
+            })
+
+        return {
+            '_type': 'playlist',
+            'id': episode_id,
+            'display_id': episode_path,
+            'entries': entries,
+            'title': '%s - %s' % (show_title, episode_title),
+            'description': episode_description,
+            'duration': episode_duration
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aftenposten.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aftenposten.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aftenposten.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,23 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+
+
+class AftenpostenIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?aftenposten\.no/webtv/(?:#!/)?video/(?P<id>\d+)'
+    _TEST = {
+        'url': 'http://www.aftenposten.no/webtv/#!/video/21039/trailer-sweatshop-i-can-t-take-any-more',
+        'md5': 'fd828cd29774a729bf4d4425fe192972',
+        'info_dict': {
+            'id': '21039',
+            'ext': 'mov',
+            'title': 'TRAILER: "Sweatshop" - I can´t take any more',
+            'description': 'md5:21891f2b0dd7ec2f78d84a50e54f8238',
+            'timestamp': 1416927969,
+            'upload_date': '20141125',
+        }
+    }
+
+    def _real_extract(self, url):
+        return self.url_result('xstream:ap:%s' % self._match_id(url), 'Xstream')

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aftonbladet.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aftonbladet.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aftonbladet.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,64 @@
+# encoding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import int_or_none
+
+
+class AftonbladetIE(InfoExtractor):
+    _VALID_URL = r'http://tv\.aftonbladet\.se/abtv/articles/(?P<id>[0-9]+)'
+    _TEST = {
+        'url': 'http://tv.aftonbladet.se/abtv/articles/36015',
+        'info_dict': {
+            'id': '36015',
+            'ext': 'mp4',
+            'title': 'Vulkanutbrott i rymden - nu släpper NASA bilderna',
+            'description': 'Jupiters måne mest aktiv av alla himlakroppar',
+            'timestamp': 1394142732,
+            'upload_date': '20140306',
+        },
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        # find internal video meta data
+        meta_url = 'http://aftonbladet-play.drlib.aptoma.no/video/%s.json'
+        player_config = self._parse_json(self._html_search_regex(
+            r'data-player-config="([^"]+)"', webpage, 'player config'), video_id)
+        internal_meta_id = player_config['videoId']
+        internal_meta_url = meta_url % internal_meta_id
+        internal_meta_json = self._download_json(
+            internal_meta_url, video_id, 'Downloading video meta data')
+
+        # find internal video formats
+        format_url = 'http://aftonbladet-play.videodata.drvideo.aptoma.no/actions/video/?id=%s'
+        internal_video_id = internal_meta_json['videoId']
+        internal_formats_url = format_url % internal_video_id
+        internal_formats_json = self._download_json(
+            internal_formats_url, video_id, 'Downloading video formats')
+
+        formats = []
+        for fmt in internal_formats_json['formats']['http']['pseudostreaming']['mp4']:
+            p = fmt['paths'][0]
+            formats.append({
+                'url': 'http://%s:%d/%s/%s' % (p['address'], p['port'], p['path'], p['filename']),
+                'ext': 'mp4',
+                'width': int_or_none(fmt.get('width')),
+                'height': int_or_none(fmt.get('height')),
+                'tbr': int_or_none(fmt.get('bitrate')),
+                'protocol': 'http',
+            })
+        self._sort_formats(formats)
+
+        return {
+            'id': video_id,
+            'title': internal_meta_json['title'],
+            'formats': formats,
+            'thumbnail': internal_meta_json.get('imageUrl'),
+            'description': internal_meta_json.get('shortPreamble'),
+            'timestamp': int_or_none(internal_meta_json.get('timePublished')),
+            'duration': int_or_none(internal_meta_json.get('duration')),
+            'view_count': int_or_none(internal_meta_json.get('views')),
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/airmozilla.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/airmozilla.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/airmozilla.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,76 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..utils import (
+    int_or_none,
+    parse_duration,
+    parse_iso8601,
+)
+
+
+class AirMozillaIE(InfoExtractor):
+    _VALID_URL = r'https?://air\.mozilla\.org/(?P<id>[0-9a-z-]+)/?'
+    _TEST = {
+        'url': 'https://air.mozilla.org/privacy-lab-a-meetup-for-privacy-minded-people-in-san-francisco/',
+        'md5': '2e3e7486ba5d180e829d453875b9b8bf',
+        'info_dict': {
+            'id': '6x4q2w',
+            'ext': 'mp4',
+            'title': 'Privacy Lab - a meetup for privacy minded people in San Francisco',
+            'thumbnail': 're:https://\w+\.cloudfront\.net/6x4q2w/poster\.jpg\?t=\d+',
+            'description': 'Brings together privacy professionals and others interested in privacy at for-profits, non-profits, and NGOs in an effort to contribute to the state of the ecosystem...',
+            'timestamp': 1422487800,
+            'upload_date': '20150128',
+            'location': 'SFO Commons',
+            'duration': 3780,
+            'view_count': int,
+            'categories': ['Main'],
+        }
+    }
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        video_id = self._html_search_regex(r'//vid\.ly/(.*?)/embed', webpage, 'id')
+
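+        # The vid.ly embed script defines "var jwconfig = {...}", a JW Player
+        # style configuration whose playlist carries the actual source list.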
+        embed_script = self._download_webpage('https://vid.ly/{0}/embed'.format(video_id), video_id)
+        jwconfig = self._search_regex(r'\svar jwconfig = (\{.*?\});\s', embed_script, 'metadata')
+        metadata = self._parse_json(jwconfig, video_id)
+
+        formats = [{
+            'url': source['file'],
+            'ext': source['type'],
+            'format_id': self._search_regex(r'&format=(.*)$', source['file'], 'video format'),
+            'format': source['label'],
+            'height': int(source['label'].rstrip('p')),
+        } for source in metadata['playlist'][0]['sources']]
+        self._sort_formats(formats)
+
+        view_count = int_or_none(self._html_search_regex(
+            r'Views since archived: ([0-9]+)',
+            webpage, 'view count', fatal=False))
+        timestamp = parse_iso8601(self._html_search_regex(
+            r'<time datetime="(.*?)"', webpage, 'timestamp', fatal=False))
+        duration = parse_duration(self._search_regex(
+            r'Duration:\s*(\d+\s*hours?\s*\d+\s*minutes?)',
+            webpage, 'duration', fatal=False))
+
+        return {
+            'id': video_id,
+            'title': self._og_search_title(webpage),
+            'formats': formats,
+            'url': self._og_search_url(webpage),
+            'display_id': display_id,
+            'thumbnail': metadata['playlist'][0].get('image'),
+            'description': self._og_search_description(webpage),
+            'timestamp': timestamp,
+            'location': self._html_search_regex(r'Location: (.*)', webpage, 'location', default=None),
+            'duration': duration,
+            'view_count': view_count,
+            'categories': re.findall(r'<a href=".*?" class="channel">(.*?)</a>', webpage),
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aljazeera.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aljazeera.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aljazeera.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,35 @@
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+
+
+class AlJazeeraIE(InfoExtractor):
+    _VALID_URL = r'http://www\.aljazeera\.com/programmes/.*?/(?P<id>[^/]+)\.html'
+
+    _TEST = {
+        'url': 'http://www.aljazeera.com/programmes/the-slum/2014/08/deliverance-201482883754237240.html',
+        'info_dict': {
+            'id': '3792260579001',
+            'ext': 'mp4',
+            'title': 'The Slum - Episode 1: Deliverance',
+            'description': 'As a birth attendant advocating for family planning, Remy is on the frontline of Tondo\'s battle with overcrowding.',
+            'uploader': 'Al Jazeera English',
+        },
+        'add_ie': ['Brightcove'],
+    }
+
+    def _real_extract(self, url):
+        program_name = self._match_id(url)
+        webpage = self._download_webpage(url, program_name)
+        brightcove_id = self._search_regex(
+            r'RenderPagesVideo\(\'(.+?)\'', webpage, 'brightcove id')
+
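+        # Hand off to the Brightcove extractor with the player key baked into the URL.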
+        return {
+            '_type': 'url',
+            'url': (
+                'brightcove:'
+                'playerKey=AQ~~%2CAAAAmtVJIFk~%2CTVGOQ5ZTwJbeMWnq5d_H4MOM57xfzApc'
+                '&%40videoPlayer={0}'.format(brightcove_id)
+            ),
+            'ie_key': 'Brightcove',
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/allocine.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/allocine.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/allocine.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,89 @@
+# -*- coding: utf-8 -*-
+from __future__ import unicode_literals
+
+import re
+import json
+
+from .common import InfoExtractor
+from ..compat import compat_str
+from ..utils import (
+    qualities,
+)
+
+
+class AllocineIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?allocine\.fr/(?P<typ>article|video|film)/(fichearticle_gen_carticle=|player_gen_cmedia=|fichefilm_gen_cfilm=|video-)(?P<id>[0-9]+)(?:\.html)?'
+
+    _TESTS = [{
+        'url': 'http://www.allocine.fr/article/fichearticle_gen_carticle=18635087.html',
+        'md5': '0c9fcf59a841f65635fa300ac43d8269',
+        'info_dict': {
+            'id': '19546517',
+            'ext': 'mp4',
+            'title': 'Astérix - Le Domaine des Dieux Teaser VF',
+            'description': 'md5:abcd09ce503c6560512c14ebfdb720d2',
+            'thumbnail': 're:http://.*\.jpg',
+        },
+    }, {
+        'url': 'http://www.allocine.fr/video/player_gen_cmedia=19540403&cfilm=222257.html',
+        'md5': 'd0cdce5d2b9522ce279fdfec07ff16e0',
+        'info_dict': {
+            'id': '19540403',
+            'ext': 'mp4',
+            'title': 'Planes 2 Bande-annonce VF',
+            'description': 'md5:eeaffe7c2d634525e21159b93acf3b1e',
+            'thumbnail': 're:http://.*\.jpg',
+        },
+    }, {
+        'url': 'http://www.allocine.fr/film/fichefilm_gen_cfilm=181290.html',
+        'md5': '101250fb127ef9ca3d73186ff22a47ce',
+        'info_dict': {
+            'id': '19544709',
+            'ext': 'mp4',
+            'title': 'Dragons 2 - Bande annonce finale VF',
+            'description': 'md5:71742e3a74b0d692c7fce0dd2017a4ac',
+            'thumbnail': 're:http://.*\.jpg',
+        },
+    }, {
+        'url': 'http://www.allocine.fr/video/video-19550147/',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        typ = mobj.group('typ')
+        display_id = mobj.group('id')
+
+        webpage = self._download_webpage(url, display_id)
+
+        if typ == 'film':
+            video_id = self._search_regex(r'href="/video/player_gen_cmedia=([0-9]+).+"', webpage, 'video id')
+        else:
+            player = self._search_regex(r'data-player=\'([^\']+)\'>', webpage, 'data player')
+
+            player_data = json.loads(player)
+            video_id = compat_str(player_data['refMedia'])
+
+        xml = self._download_xml('http://www.allocine.fr/ws/AcVisiondataV4.ashx?media=%s' % video_id, display_id)
+
+        video = xml.find('.//AcVisionVideo').attrib
+        quality = qualities(['ld', 'md', 'hd'])
+
+        formats = []
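+        # The AcVisionVideo node is assumed to expose attributes named like
+        # ld_path/md_path/hd_path (matching the quality list above); the prefix
+        # before the underscore doubles as the format id.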
+        for k, v in video.items():
+            if re.match(r'.+_path', k):
+                format_id = k.split('_')[0]
+                formats.append({
+                    'format_id': format_id,
+                    'quality': quality(format_id),
+                    'url': v,
+                })
+        self._sort_formats(formats)
+
+        return {
+            'id': video_id,
+            'title': video['videoTitle'],
+            'thumbnail': self._og_search_thumbnail(webpage),
+            'formats': formats,
+            'description': self._og_search_description(webpage),
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/alphaporno.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/alphaporno.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/alphaporno.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,78 @@
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import (
+    parse_iso8601,
+    parse_duration,
+    parse_filesize,
+    int_or_none,
+)
+
+
+class AlphaPornoIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?alphaporno\.com/videos/(?P<id>[^/]+)'
+    _TEST = {
+        'url': 'http://www.alphaporno.com/videos/sensual-striptease-porn-with-samantha-alexandra/',
+        'md5': 'feb6d3bba8848cd54467a87ad34bd38e',
+        'info_dict': {
+            'id': '258807',
+            'display_id': 'sensual-striptease-porn-with-samantha-alexandra',
+            'ext': 'mp4',
+            'title': 'Sensual striptease porn with Samantha Alexandra',
+            'thumbnail': 're:https?://.*\.jpg$',
+            'timestamp': 1418694611,
+            'upload_date': '20141216',
+            'duration': 387,
+            'filesize_approx': 54120000,
+            'tbr': 1145,
+            'categories': list,
+            'age_limit': 18,
+        }
+    }
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+
+        webpage = self._download_webpage(url, display_id)
+
+        video_id = self._search_regex(
+            r"video_id\s*:\s*'([^']+)'", webpage, 'video id', default=None)
+
+        video_url = self._search_regex(
+            r"video_url\s*:\s*'([^']+)'", webpage, 'video url')
+        ext = self._html_search_meta(
+            'encodingFormat', webpage, 'ext', default='.mp4')[1:]
+
+        title = self._search_regex(
+            [r'<meta content="([^"]+)" itemprop="description">',
+             r'class="title" itemprop="name">([^<]+)<'],
+            webpage, 'title')
+        thumbnail = self._html_search_meta('thumbnail', webpage, 'thumbnail')
+        timestamp = parse_iso8601(self._html_search_meta(
+            'uploadDate', webpage, 'upload date'))
+        duration = parse_duration(self._html_search_meta(
+            'duration', webpage, 'duration'))
+        filesize_approx = parse_filesize(self._html_search_meta(
+            'contentSize', webpage, 'file size'))
+        bitrate = int_or_none(self._html_search_meta(
+            'bitrate', webpage, 'bitrate'))
+        categories = self._html_search_meta(
+            'keywords', webpage, 'categories', default='').split(',')
+
+        age_limit = self._rta_search(webpage)
+
+        return {
+            'id': video_id,
+            'display_id': display_id,
+            'url': video_url,
+            'ext': ext,
+            'title': title,
+            'thumbnail': thumbnail,
+            'timestamp': timestamp,
+            'duration': duration,
+            'filesize_approx': filesize_approx,
+            'tbr': bitrate,
+            'categories': categories,
+            'age_limit': age_limit,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/anitube.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/anitube.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/anitube.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,59 @@
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+
+
+class AnitubeIE(InfoExtractor):
+    IE_NAME = 'anitube.se'
+    _VALID_URL = r'https?://(?:www\.)?anitube\.se/video/(?P<id>\d+)'
+
+    _TEST = {
+        'url': 'http://www.anitube.se/video/36621',
+        'md5': '59d0eeae28ea0bc8c05e7af429998d43',
+        'info_dict': {
+            'id': '36621',
+            'ext': 'mp4',
+            'title': 'Recorder to Randoseru 01',
+            'duration': 180.19,
+        },
+        'skip': 'Blocked in the US',
+    }
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        video_id = mobj.group('id')
+
+        webpage = self._download_webpage(url, video_id)
+        key = self._html_search_regex(
+            r'http://www\.anitube\.se/embed/([A-Za-z0-9_-]*)', webpage, 'key')
+
+        config_xml = self._download_xml(
+            'http://www.anitube.se/nuevo/econfig.php?key=%s' % key, key)
+
+        video_title = config_xml.find('title').text
+        thumbnail = config_xml.find('image').text
+        duration = float(config_xml.find('duration').text)
+
+        formats = []
+        video_url = config_xml.find('file')
+        if video_url is not None:
+            formats.append({
+                'format_id': 'sd',
+                'url': video_url.text,
+            })
+        video_url = config_xml.find('filehd')
+        if video_url is not None:
+            formats.append({
+                'format_id': 'hd',
+                'url': video_url.text,
+            })
+
+        return {
+            'id': video_id,
+            'title': video_title,
+            'thumbnail': thumbnail,
+            'duration': duration,
+            'formats': formats
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/anysex.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/anysex.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/anysex.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,61 @@
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..utils import (
+    parse_duration,
+    int_or_none,
+)
+
+
+class AnySexIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?anysex\.com/(?P<id>\d+)'
+    _TEST = {
+        'url': 'http://anysex.com/156592/',
+        'md5': '023e9fbb7f7987f5529a394c34ad3d3d',
+        'info_dict': {
+            'id': '156592',
+            'ext': 'mp4',
+            'title': 'Busty and sexy blondie in her bikini strips for you',
+            'description': 'md5:de9e418178e2931c10b62966474e1383',
+            'categories': ['Erotic'],
+            'duration': 270,
+            'age_limit': 18,
+        }
+    }
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        video_id = mobj.group('id')
+
+        webpage = self._download_webpage(url, video_id)
+
+        video_url = self._html_search_regex(r"video_url\s*:\s*'([^']+)'", webpage, 'video URL')
+
+        title = self._html_search_regex(r'<title>(.*?)</title>', webpage, 'title')
+        description = self._html_search_regex(
+            r'<div class="description"[^>]*>([^<]+)</div>', webpage, 'description', fatal=False)
+        thumbnail = self._html_search_regex(
+            r'preview_url\s*:\s*\'(.*?)\'', webpage, 'thumbnail', fatal=False)
+
+        categories = re.findall(
+            r'<a href="http://anysex\.com/categories/[^"]+"; title="[^"]*">([^<]+)</a>', webpage)
+
+        duration = parse_duration(self._search_regex(
+            r'<b>Duration:</b> (?:<q itemprop="duration">)?(\d+:\d+)', webpage, 'duration', fatal=False))
+        view_count = int_or_none(self._html_search_regex(
+            r'<b>Views:</b> (\d+)', webpage, 'view count', fatal=False))
+
+        return {
+            'id': video_id,
+            'url': video_url,
+            'ext': 'mp4',
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'categories': categories,
+            'duration': duration,
+            'view_count': view_count,
+            'age_limit': 18,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aol.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aol.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aol.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,70 @@
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+
+
+class AolIE(InfoExtractor):
+    IE_NAME = 'on.aol.com'
+    _VALID_URL = r'''(?x)
+        (?:
+            aol-video:|
+            http://on\.aol\.com/
+            (?:
+                video/.*-|
+                playlist/(?P<playlist_display_id>[^/?#]+?)-(?P<playlist_id>[0-9]+)[?#].*_videoid=
+            )
+        )
+        (?P<id>[0-9]+)
+        (?:$|\?)
+    '''
+
+    _TESTS = [{
+        'url': 'http://on.aol.com/video/u-s--official-warns-of-largest-ever-irs-phone-scam-518167793?icid=OnHomepageC2Wide_MustSee_Img',
+        'md5': '18ef68f48740e86ae94b98da815eec42',
+        'info_dict': {
+            'id': '518167793',
+            'ext': 'mp4',
+            'title': 'U.S. Official Warns Of \'Largest Ever\' IRS Phone Scam',
+        },
+        'add_ie': ['FiveMin'],
+    }, {
+        'url': 'http://on.aol.com/playlist/brace-yourself---todays-weirdest-news-152147?icid=OnHomepageC4_Omg_Img#_videoid=518184316',
+        'info_dict': {
+            'id': '152147',
+            'title': 'Brace Yourself - Today\'s Weirdest News',
+        },
+        'playlist_mincount': 10,
+    }]
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        video_id = mobj.group('id')
+        playlist_id = mobj.group('playlist_id')
+        if not playlist_id or self._downloader.params.get('noplaylist'):
+            return self.url_result('5min:%s' % video_id)
+
+        self.to_screen('Downloading playlist %s - add --no-playlist to just download video %s' % (playlist_id, video_id))
+
+        webpage = self._download_webpage(url, playlist_id)
+        title = self._html_search_regex(
+            r'<h1 class="video-title[^"]*">(.+?)</h1>', webpage, 'title')
+        playlist_html = self._search_regex(
+            r"(?s)<ul\s+class='video-related[^']*'>(.*?)</ul>", webpage,
+            'playlist HTML')
+        entries = [{
+            '_type': 'url',
+            'url': 'aol-video:%s' % m.group('id'),
+            'ie_key': 'Aol',
+        } for m in re.finditer(
+            r"<a\s+href='.*videoid=(?P<id>[0-9]+)'\s+class='video-thumb'>",
+            playlist_html)]
+
+        return {
+            '_type': 'playlist',
+            'id': playlist_id,
+            'display_id': mobj.group('playlist_display_id'),
+            'title': title,
+            'entries': entries,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aparat.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aparat.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/aparat.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,60 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..utils import (
+    ExtractorError,
+    HEADRequest,
+)
+
+
+class AparatIE(InfoExtractor):
+    _VALID_URL = r'^https?://(?:www\.)?aparat\.com/(?:v/|video/video/embed/videohash/)(?P<id>[a-zA-Z0-9]+)'
+
+    _TEST = {
+        'url': 'http://www.aparat.com/v/wP8On',
+        'md5': '6714e0af7e0d875c5a39c4dc4ab46ad1',
+        'info_dict': {
+            'id': 'wP8On',
+            'ext': 'mp4',
+            'title': 'تیم گلکسی 11 - زومیت',
+            'age_limit': 0,
+        },
+        # 'skip': 'Extremely unreliable',
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
+        # Note: There is an easier-to-parse configuration at
+        # http://www.aparat.com/video/video/config/videohash/%video_id
+        # but the URL in there does not work
+        embed_url = ('http://www.aparat.com/video/video/embed/videohash/' +
+                     video_id + '/vt/frame')
+        webpage = self._download_webpage(embed_url, video_id)
+
+        video_urls = [video_url.replace('\\/', '/') for video_url in re.findall(
+            r'(?:fileList\[[0-9]+\]\s*=|"file"\s*:)\s*"([^"]+)"', webpage)]
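+        # Probe each candidate URL with a cheap HEAD request; the for/else
+        # construct only raises when none of the mirrors responds.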
+        for i, video_url in enumerate(video_urls):
+            req = HEADRequest(video_url)
+            res = self._request_webpage(
+                req, video_id, note='Testing video URL %d' % i, errnote=False)
+            if res:
+                break
+        else:
+            raise ExtractorError('No working video URLs found')
+
+        title = self._search_regex(r'\s+title:\s*"([^"]+)"', webpage, 'title')
+        thumbnail = self._search_regex(
+            r'image:\s*"([^"]+)"', webpage, 'thumbnail', fatal=False)
+
+        return {
+            'id': video_id,
+            'title': title,
+            'url': video_url,
+            'ext': 'mp4',
+            'thumbnail': thumbnail,
+            'age_limit': self._family_friendly_search(webpage),
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/appletrailers.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/appletrailers.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/appletrailers.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,146 @@
+from __future__ import unicode_literals
+
+import re
+import json
+
+from .common import InfoExtractor
+from ..compat import compat_urlparse
+from ..utils import (
+    int_or_none,
+)
+
+
+class AppleTrailersIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?trailers\.apple\.com/(?:trailers|ca)/(?P<company>[^/]+)/(?P<movie>[^/]+)'
+    _TESTS = [{
+        "url": "http://trailers.apple.com/trailers/wb/manofsteel/";,
+        'info_dict': {
+            'id': 'manofsteel',
+        },
+        "playlist": [
+            {
+                "md5": "d97a8e575432dbcb81b7c3acb741f8a8",
+                "info_dict": {
+                    "id": "manofsteel-trailer4",
+                    "ext": "mov",
+                    "duration": 111,
+                    "title": "Trailer 4",
+                    "upload_date": "20130523",
+                    "uploader_id": "wb",
+                },
+            },
+            {
+                "md5": "b8017b7131b721fb4e8d6f49e1df908c",
+                "info_dict": {
+                    "id": "manofsteel-trailer3",
+                    "ext": "mov",
+                    "duration": 182,
+                    "title": "Trailer 3",
+                    "upload_date": "20130417",
+                    "uploader_id": "wb",
+                },
+            },
+            {
+                "md5": "d0f1e1150989b9924679b441f3404d48",
+                "info_dict": {
+                    "id": "manofsteel-trailer",
+                    "ext": "mov",
+                    "duration": 148,
+                    "title": "Trailer",
+                    "upload_date": "20121212",
+                    "uploader_id": "wb",
+                },
+            },
+            {
+                "md5": "5fe08795b943eb2e757fa95cb6def1cb",
+                "info_dict": {
+                    "id": "manofsteel-teaser",
+                    "ext": "mov",
+                    "duration": 93,
+                    "title": "Teaser",
+                    "upload_date": "20120721",
+                    "uploader_id": "wb",
+                },
+            },
+        ]
+    }, {
+        'url': 'http://trailers.apple.com/ca/metropole/autrui/',
+        'only_matching': True,
+    }]
+
+    _JSON_RE = r'iTunes.playURL\((.*?)\);'
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        movie = mobj.group('movie')
+        uploader_id = mobj.group('company')
+
+        playlist_url = compat_urlparse.urljoin(url, 'includes/playlists/itunes.inc')
+
+        def fix_html(s):
+            s = re.sub(r'(?s)<script[^<]*?>.*?</script>', '', s)
+            s = re.sub(r'<img ([^<]*?)>', r'<img \1/>', s)
+            # The ' in the onClick attributes is not escaped; without this fix pages
+            # like http://trailers.apple.com/trailers/wb/gravity/ could not be parsed
+
+            def _clean_json(m):
+                return 'iTunes.playURL(%s);' % m.group(1).replace('\'', '&#39;')
+            s = re.sub(self._JSON_RE, _clean_json, s)
+            s = '<html>%s</html>' % s
+            return s
+        doc = self._download_xml(playlist_url, movie, transform_source=fix_html)
+
+        playlist = []
+        for li in doc.findall('./div/ul/li'):
+            on_click = li.find('.//a').attrib['onClick']
+            trailer_info_json = self._search_regex(self._JSON_RE,
+                                                   on_click, 'trailer info')
+            trailer_info = json.loads(trailer_info_json)
+            title = trailer_info['title']
+            video_id = movie + '-' + re.sub(r'[^a-zA-Z0-9]', '', title).lower()
+            thumbnail = li.find('.//img').attrib['src']
+            upload_date = trailer_info['posted'].replace('-', '')
+
+            runtime = trailer_info['runtime']
+            m = re.search(r'(?P<minutes>[0-9]+):(?P<seconds>[0-9]{1,2})', runtime)
+            duration = None
+            if m:
+                duration = 60 * int(m.group('minutes')) + int(m.group('seconds'))
+
+            first_url = trailer_info['url']
+            trailer_id = first_url.split('/')[-1].rpartition('_')[0].lower()
+            settings_json_url = compat_urlparse.urljoin(url, 'includes/settings/%s.json' % trailer_id)
+            settings = self._download_json(settings_json_url, trailer_id, 'Downloading settings json')
+
+            formats = []
+            for format in settings['metadata']['sizes']:
+                # 'src' is not the real file; the real one adds an 'h' to the size suffix (_480p.mov -> _h480p.mov)
+                format_url = re.sub(r'_(\d*p\.mov)', r'_h\1', format['src'])
+                formats.append({
+                    'url': format_url,
+                    'format': format['type'],
+                    'width': int_or_none(format['width']),
+                    'height': int_or_none(format['height']),
+                })
+
+            self._sort_formats(formats)
+
+            playlist.append({
+                '_type': 'video',
+                'id': video_id,
+                'formats': formats,
+                'title': title,
+                'duration': duration,
+                'thumbnail': thumbnail,
+                'upload_date': upload_date,
+                'uploader_id': uploader_id,
+                'http_headers': {
+                    'User-Agent': 'QuickTime compatible (youtube-dl)',
+                },
+            })
+
+        return {
+            '_type': 'playlist',
+            'id': movie,
+            'entries': playlist,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/archiveorg.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/archiveorg.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/archiveorg.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,67 @@
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import unified_strdate
+
+
+class ArchiveOrgIE(InfoExtractor):
+    IE_NAME = 'archive.org'
+    IE_DESC = 'archive.org videos'
+    _VALID_URL = r'https?://(?:www\.)?archive\.org/details/(?P<id>[^?/]+)(?:[?].*)?$'
+    _TESTS = [{
+        'url': 'http://archive.org/details/XD300-23_68HighlightsAResearchCntAugHumanIntellect',
+        'md5': '8af1d4cf447933ed3c7f4871162602db',
+        'info_dict': {
+            'id': 'XD300-23_68HighlightsAResearchCntAugHumanIntellect',
+            'ext': 'ogv',
+            'title': '1968 Demo - FJCC Conference Presentation Reel #1',
+            'description': 'md5:1780b464abaca9991d8968c877bb53ed',
+            'upload_date': '19681210',
+            'uploader': 'SRI International'
+        }
+    }, {
+        'url': 'https://archive.org/details/Cops1922',
+        'md5': '18f2a19e6d89af8425671da1cf3d4e04',
+        'info_dict': {
+            'id': 'Cops1922',
+            'ext': 'ogv',
+            'title': 'Buster Keaton\'s "Cops" (1922)',
+            'description': 'md5:70f72ee70882f713d4578725461ffcc3',
+        }
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
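+        # Appending output=json to a details URL returns the item metadata as JSON.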
+        json_url = url + ('&' if '?' in url else '?') + 'output=json'
+        data = self._download_json(json_url, video_id)
+
+        def get_optional(data_dict, field):
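+            # Metadata fields come back as lists; take the first element if present.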
+            return data_dict['metadata'].get(field, [None])[0]
+
+        title = get_optional(data, 'title')
+        description = get_optional(data, 'description')
+        uploader = get_optional(data, 'creator')
+        upload_date = unified_strdate(get_optional(data, 'date'))
+
+        formats = [
+            {
+                'format': fdata['format'],
+                'url': 'http://' + data['server'] + data['dir'] + fn,
+                'file_size': int(fdata['size']),
+            }
+            for fn, fdata in data['files'].items()
+            if 'Video' in fdata['format']]
+
+        self._sort_formats(formats)
+
+        return {
+            '_type': 'video',
+            'id': video_id,
+            'title': title,
+            'formats': formats,
+            'description': description,
+            'uploader': uploader,
+            'upload_date': upload_date,
+            'thumbnail': data.get('misc', {}).get('image'),
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/ard.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/ard.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/ard.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,191 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from .generic import GenericIE
+from ..utils import (
+    determine_ext,
+    ExtractorError,
+    qualities,
+    int_or_none,
+    parse_duration,
+    unified_strdate,
+    xpath_text,
+    parse_xml,
+)
+
+
+class ARDMediathekIE(InfoExtractor):
+    IE_NAME = 'ARD:mediathek'
+    _VALID_URL = r'^https?://(?:(?:www\.)?ardmediathek\.de|mediathek\.daserste\.de)/(?:.*/)(?P<video_id>[0-9]+|[^0-9][^/\?]+)[^/\?]*(?:\?.*)?'
+
+    _TESTS = [{
+        'url': 'http://mediathek.daserste.de/sendungen_a-z/328454_anne-will/22429276_vertrauen-ist-gut-spionieren-ist-besser-geht',
+        'only_matching': True,
+    }, {
+        'url': 'http://www.ardmediathek.de/tv/Tatort/Das-Wunder-von-Wolbeck-Video-tgl-ab-20/Das-Erste/Video?documentId=22490580&bcastId=602916',
+        'info_dict': {
+            'id': '22490580',
+            'ext': 'mp4',
+            'title': 'Das Wunder von Wolbeck (Video tgl. ab 20 Uhr)',
+            'description': 'Auf einem restaurierten Hof bei Wolbeck wird der Heilpraktiker Raffael Lembeck eines morgens von seiner Frau Stella tot aufgefunden. Das Opfer war offensichtlich in seiner Praxis zu Fall gekommen und ist dann verblutet, erklärt Prof. Boerne am Tatort.',
+        },
+        'skip': 'Blocked outside of Germany',
+    }]
+
+    def _real_extract(self, url):
+        # determine video id from url
+        m = re.match(self._VALID_URL, url)
+
+        numid = re.search(r'documentId=([0-9]+)', url)
+        if numid:
+            video_id = numid.group(1)
+        else:
+            video_id = m.group('video_id')
+
+        webpage = self._download_webpage(url, video_id)
+
+        if '>Der gewünschte Beitrag ist nicht mehr verfügbar.<' in webpage:
+            raise ExtractorError('Video %s is no longer available' % video_id, expected=True)
+
+        if 'Diese Sendung ist für Jugendliche unter 12 Jahren nicht geeignet. Der Clip ist deshalb nur von 20 bis 6 Uhr verfügbar.' in webpage:
+            raise ExtractorError('This program is only suitable for those aged 12 and older. Video %s is therefore only available between 8 pm and 6 am.' % video_id, expected=True)
+
+        if re.search(r'[\?&]rss($|[=&])', url):
+            doc = parse_xml(webpage)
+            if doc.tag == 'rss':
+                return GenericIE()._extract_rss(url, video_id, doc)
+
+        title = self._html_search_regex(
+            [r'<h1(?:\s+class="boxTopHeadline")?>(.*?)</h1>',
+             r'<meta name="dcterms.title" content="(.*?)"/>',
+             r'<h4 class="headline">(.*?)</h4>'],
+            webpage, 'title')
+        description = self._html_search_meta(
+            'dcterms.abstract', webpage, 'description', default=None)
+        if description is None:
+            description = self._html_search_meta(
+                'description', webpage, 'meta description')
+
+        # Thumbnail is sometimes not present.
+        # It is in the mobile version, but that seems to use a different URL
+        # structure altogether.
+        thumbnail = self._og_search_thumbnail(webpage, default=None)
+
+        media_streams = re.findall(r'''(?x)
+            mediaCollection\.addMediaStream\([0-9]+,\s*[0-9]+,\s*"[^"]*",\s*
+            "([^"]+)"''', webpage)
+
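+        # Some pages inline mediaCollection.addMediaStream(...) calls; if none
+        # are found, fall back to the JSON play/media API queried below.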
+        if media_streams:
+            QUALITIES = qualities(['lo', 'hi', 'hq'])
+            formats = []
+            for furl in set(media_streams):
+                if furl.endswith('.f4m'):
+                    fid = 'f4m'
+                else:
+                    fid_m = re.match(r'.*\.([^.]+)\.[^.]+$', furl)
+                    fid = fid_m.group(1) if fid_m else None
+                formats.append({
+                    'quality': QUALITIES(fid),
+                    'format_id': fid,
+                    'url': furl,
+                })
+        else:  # request JSON file
+            media_info = self._download_json(
+                'http://www.ardmediathek.de/play/media/%s' % video_id, video_id)
+            # The second element of the _mediaArray contains the standard http urls
+            streams = media_info['_mediaArray'][1]['_mediaStreamArray']
+            if not streams:
+                if '"fsk"' in webpage:
+                    raise ExtractorError('This video is only available after 20:00', expected=True)
+
+            formats = []
+            for s in streams:
+                if type(s['_stream']) == list:
+                    for index, url in enumerate(s['_stream'][::-1]):
+                        quality = s['_quality'] + index
+                        formats.append({
+                            'quality': quality,
+                            'url': url,
+                            'format_id': '%s-%s' % (determine_ext(url), quality)
+                        })
+                    continue
+
+                format = {
+                    'quality': s['_quality'],
+                    'url': s['_stream'],
+                }
+
+                format['format_id'] = '%s-%s' % (
+                    determine_ext(format['url']), format['quality'])
+
+                formats.append(format)
+
+        self._sort_formats(formats)
+
+        return {
+            'id': video_id,
+            'title': title,
+            'description': description,
+            'formats': formats,
+            'thumbnail': thumbnail,
+        }
+
+
+class ARDIE(InfoExtractor):
+    _VALID_URL = r'(?P<mainurl>https?://(www\.)?daserste\.de/[^?#]+/videos/(?P<display_id>[^/?#]+)-(?P<id>[0-9]+))\.html'
+    _TEST = {
+        'url': 'http://www.daserste.de/information/reportage-dokumentation/dokus/videos/die-story-im-ersten-mission-unter-falscher-flagge-100.html',
+        'md5': 'd216c3a86493f9322545e045ddc3eb35',
+        'info_dict': {
+            'display_id': 'die-story-im-ersten-mission-unter-falscher-flagge',
+            'id': '100',
+            'ext': 'mp4',
+            'duration': 2600,
+            'title': 'Die Story im Ersten: Mission unter falscher Flagge',
+            'upload_date': '20140804',
+            'thumbnail': 're:^https?://.*\.jpg$',
+        }
+    }
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        display_id = mobj.group('display_id')
+
+        player_url = mobj.group('mainurl') + '~playerXml.xml'
+        doc = self._download_xml(player_url, display_id)
+        video_node = doc.find('./video')
+        upload_date = unified_strdate(xpath_text(
+            video_node, './broadcastDate'))
+        thumbnail = xpath_text(video_node, './/teaserImage//variant/url')
+
+        formats = []
+        for a in video_node.findall('.//asset'):
+            f = {
+                'format_id': a.attrib['type'],
+                'width': int_or_none(a.find('./frameWidth').text),
+                'height': int_or_none(a.find('./frameHeight').text),
+                'vbr': int_or_none(a.find('./bitrateVideo').text),
+                'abr': int_or_none(a.find('./bitrateAudio').text),
+                'vcodec': a.find('./codecVideo').text,
+                'tbr': int_or_none(a.find('./totalBitrate').text),
+            }
+            if a.find('./serverPrefix').text:
+                f['url'] = a.find('./serverPrefix').text
+                f['playpath'] = a.find('./fileName').text
+            else:
+                f['url'] = a.find('./fileName').text
+            formats.append(f)
+        self._sort_formats(formats)
+
+        return {
+            'id': mobj.group('id'),
+            'formats': formats,
+            'display_id': display_id,
+            'title': video_node.find('./title').text,
+            'duration': parse_duration(video_node.find('./duration').text),
+            'upload_date': upload_date,
+            'thumbnail': thumbnail,
+        }

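The qualities(['lo', 'hi', 'hq']) helper used in the HTTP-stream branch above just maps
an ordered list of quality labels to sortable integers, ranking unknown labels lowest.
A minimal sketch of that behaviour (an illustrative reimplementation, not the actual
youtube_dl.utils code):

    def qualities(quality_ids):
        """Rank a quality label by its position in quality_ids;
        unknown labels sort below all known ones."""
        def q(qid):
            try:
                return quality_ids.index(qid)
            except ValueError:
                return -1
        return q

    QUALITIES = qualities(['lo', 'hi', 'hq'])
    assert QUALITIES('hq') > QUALITIES('lo')  # 2 > 0
    assert QUALITIES('unknown') == -1         # sorts last
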
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/arte.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/arte.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/arte.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,254 @@
+# encoding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..utils import (
+    find_xpath_attr,
+    unified_strdate,
+    get_element_by_attribute,
+    int_or_none,
+    qualities,
+)
+
+# There are different sources of video on arte.tv, and the extraction process
+# differs for each one. The videos usually expire in 7 days, so we can't
+# add tests.
+
+
+class ArteTvIE(InfoExtractor):
+    _VALID_URL = r'http://videos\.arte\.tv/(?P<lang>fr|de)/.*-(?P<id>.*?)\.html'
+    IE_NAME = 'arte.tv'
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        lang = mobj.group('lang')
+        video_id = mobj.group('id')
+
+        ref_xml_url = url.replace('/videos/', '/do_delegate/videos/')
+        ref_xml_url = ref_xml_url.replace('.html', ',view,asPlayerXml.xml')
+        ref_xml_doc = self._download_xml(
+            ref_xml_url, video_id, note='Downloading metadata')
+        config_node = find_xpath_attr(ref_xml_doc, './/video', 'lang', lang)
+        config_xml_url = config_node.attrib['ref']
+        config = self._download_xml(
+            config_xml_url, video_id, note='Downloading configuration')
+
+        formats = [{
+            'format_id': q.attrib['quality'],
+            # The play path starts at 'mp4:'; if we don't split the URL
+            # manually, rtmpdump will parse it incorrectly
+            'url': q.text.split('mp4:', 1)[0],
+            'play_path': 'mp4:' + q.text.split('mp4:', 1)[1],
+            'ext': 'flv',
+            'quality': 2 if q.attrib['quality'] == 'hd' else 1,
+        } for q in config.findall('./urls/url')]
+        self._sort_formats(formats)
+
+        title = config.find('.//name').text
+        thumbnail = config.find('.//firstThumbnailUrl').text
+        return {
+            'id': video_id,
+            'title': title,
+            'thumbnail': thumbnail,
+            'formats': formats,
+        }
+
+
+class ArteTVPlus7IE(InfoExtractor):
+    IE_NAME = 'arte.tv:+7'
+    _VALID_URL = r'https?://(?:www\.)?arte\.tv/guide/(?P<lang>fr|de)/(?:(?:sendungen|emissions)/)?(?P<id>.*?)/(?P<name>.*?)(\?.*)?'
+
+    @classmethod
+    def _extract_url_info(cls, url):
+        mobj = re.match(cls._VALID_URL, url)
+        lang = mobj.group('lang')
+        # This is not a real id; it can be, for example, 'AJT' for the news
+        # http://www.arte.tv/guide/fr/emissions/AJT/arte-journal
+        video_id = mobj.group('id')
+        return video_id, lang
+
+    def _real_extract(self, url):
+        video_id, lang = self._extract_url_info(url)
+        webpage = self._download_webpage(url, video_id)
+        return self._extract_from_webpage(webpage, video_id, lang)
+
+    def _extract_from_webpage(self, webpage, video_id, lang):
+        json_url = self._html_search_regex(
+            [r'arte_vp_url=["\'](.*?)["\']', r'data-url=["\']([^"]+)["\']'],
+            webpage, 'json vp url')
+        return self._extract_from_json_url(json_url, video_id, lang)
+
+    def _extract_from_json_url(self, json_url, video_id, lang):
+        info = self._download_json(json_url, video_id)
+        player_info = info['videoJsonPlayer']
+
+        upload_date_str = player_info.get('shootingDate')
+        if not upload_date_str:
+            upload_date_str = player_info.get('VDA', '').split(' ')[0]
+
+        title = player_info['VTI'].strip()
+        subtitle = player_info.get('VSU', '').strip()
+        if subtitle:
+            title += ' - %s' % subtitle
+
+        info_dict = {
+            'id': player_info['VID'],
+            'title': title,
+            'description': player_info.get('VDE'),
+            'upload_date': unified_strdate(upload_date_str),
+            'thumbnail': player_info.get('programImage') or player_info.get('VTU', {}).get('IUR'),
+        }
+        qfunc = qualities(['HQ', 'MQ', 'EQ', 'SQ'])
+
+        formats = []
+        for format_id, format_dict in player_info['VSR'].items():
+            f = dict(format_dict)
+            versionCode = f.get('versionCode')
+
+            langcode = {
+                'fr': 'F',
+                'de': 'A',
+            }.get(lang, lang)
+            lang_rexs = [r'VO?%s' % langcode, r'VO?.-ST%s' % langcode]
+            lang_pref = (
+                None if versionCode is None else (
+                    10 if any(re.match(r, versionCode) for r in lang_rexs)
+                    else -10))
+            source_pref = 0
+            if versionCode is not None:
+                # The original version with subtitles has lower relevance
+                if re.match(r'VO-ST(F|A)', versionCode):
+                    source_pref -= 10
+                # The version with subtitles for the hard of hearing
+                # (sourds/malentendants) also has lower relevance
+                elif re.match(r'VO?(F|A)-STM\1', versionCode):
+                    source_pref -= 9
+            format = {
+                'format_id': format_id,
+                'preference': -10 if f.get('videoFormat') == 'M3U8' else None,
+                'language_preference': lang_pref,
+                'format_note': '%s, %s' % (f.get('versionCode'), f.get('versionLibelle')),
+                'width': int_or_none(f.get('width')),
+                'height': int_or_none(f.get('height')),
+                'tbr': int_or_none(f.get('bitrate')),
+                'quality': qfunc(f.get('quality')),
+                'source_preference': source_pref,
+            }
+
+            if f.get('mediaType') == 'rtmp':
+                format['url'] = f['streamer']
+                format['play_path'] = 'mp4:' + f['url']
+                format['ext'] = 'flv'
+            else:
+                format['url'] = f['url']
+
+            formats.append(format)
+
+        self._check_formats(formats, video_id)
+        self._sort_formats(formats)
+
+        info_dict['formats'] = formats
+        return info_dict
+
+
+# It also uses the arte_vp_url from the webpage to extract the information
+class ArteTVCreativeIE(ArteTVPlus7IE):
+    IE_NAME = 'arte.tv:creative'
+    _VALID_URL = r'https?://creative\.arte\.tv/(?P<lang>fr|de)/(?:magazine?/)?(?P<id>[^?#]+)'
+
+    _TESTS = [{
+        'url': 'http://creative.arte.tv/de/magazin/agentur-amateur-corporate-design',
+        'info_dict': {
+            'id': '72176',
+            'ext': 'mp4',
+            'title': 'Folge 2 - Corporate Design',
+            'upload_date': '20131004',
+        },
+    }, {
+        'url': 'http://creative.arte.tv/fr/Monty-Python-Reunion',
+        'info_dict': {
+            'id': '160676',
+            'ext': 'mp4',
+            'title': 'Monty Python live (mostly)',
+            'description': 'Événement ! Quarante-cinq ans après leurs premiers succès, les légendaires Monty Python remontent sur scène.\n',
+            'upload_date': '20140805',
+        }
+    }]
+
+
+class ArteTVFutureIE(ArteTVPlus7IE):
+    IE_NAME = 'arte.tv:future'
+    _VALID_URL = r'https?://future\.arte\.tv/(?P<lang>fr|de)/(thema|sujet)/.*?#article-anchor-(?P<id>\d+)'
+
+    _TEST = {
+        'url': 'http://future.arte.tv/fr/sujet/info-sciences#article-anchor-7081',
+        'info_dict': {
+            'id': '5201',
+            'ext': 'mp4',
+            'title': 'Les champignons au secours de la planète',
+            'upload_date': '20131101',
+        },
+    }
+
+    def _real_extract(self, url):
+        anchor_id, lang = self._extract_url_info(url)
+        webpage = self._download_webpage(url, anchor_id)
+        row = self._search_regex(
+            r'(?s)id="%s"[^>]*>.+?(<div[^>]*arte_vp_url[^>]*>)' % anchor_id,
+            webpage, 'row')
+        return self._extract_from_webpage(row, anchor_id, lang)
+
+
+class ArteTVDDCIE(ArteTVPlus7IE):
+    IE_NAME = 'arte.tv:ddc'
+    _VALID_URL = r'https?://ddc\.arte\.tv/(?P<lang>emission|folge)/(?P<id>.+)'
+
+    def _real_extract(self, url):
+        video_id, lang = self._extract_url_info(url)
+        if lang == 'folge':
+            lang = 'de'
+        elif lang == 'emission':
+            lang = 'fr'
+        webpage = self._download_webpage(url, video_id)
+        script_element = get_element_by_attribute('class', 'visu_video_block', webpage)
+        script_url = self._html_search_regex(r'src="(.*?)"', script_element, 'script url')
+        player_generator = self._download_webpage(
+            script_url, video_id, 'Downloading javascript player generator')
+        json_url = self._search_regex(
+            r'json_url=(.*)&rendering_place.*', player_generator, 'json url')
+        return self._extract_from_json_url(json_url, video_id, lang)
+
+
+class ArteTVConcertIE(ArteTVPlus7IE):
+    IE_NAME = 'arte.tv:concert'
+    _VALID_URL = r'https?://concert\.arte\.tv/(?P<lang>de|fr)/(?P<id>.+)'
+
+    _TEST = {
+        'url': 'http://concert.arte.tv/de/notwist-im-pariser-konzertclub-divan-du-monde',
+        'md5': '9ea035b7bd69696b67aa2ccaaa218161',
+        'info_dict': {
+            'id': '186',
+            'ext': 'mp4',
+            'title': 'The Notwist im Pariser Konzertclub "Divan du Monde"',
+            'upload_date': '20140128',
+            'description': 'md5:486eb08f991552ade77439fe6d82c305',
+        },
+    }
+
+
+class ArteTVEmbedIE(ArteTVPlus7IE):
+    IE_NAME = 'arte.tv:embed'
+    _VALID_URL = r'''(?x)
+        http://www\.arte\.tv
+        /playerv2/embed\.php\?json_url=
+        (?P<json_url>
+            http://arte\.tv/papi/tvguide/videos/stream/player/
+            (?P<lang>[^/]+)/(?P<id>[^/]+)[^&]*
+        )
+    '''
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        video_id = mobj.group('id')
+        lang = mobj.group('lang')
+        json_url = mobj.group('json_url')
+        return self._extract_from_json_url(json_url, video_id, lang)

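The language_preference computation in ArteTVPlus7IE above is the densest part of the
file: the original-language version matching the page language scores 10, anything else
-10. A rough standalone rendering of that logic (language_preference is an illustrative
name, not part of the module):

    import re

    def language_preference(version_code, lang):
        # 'fr' pages prefer F version codes, 'de' pages prefer A codes
        langcode = {'fr': 'F', 'de': 'A'}.get(lang, lang)
        if version_code is None:
            return None
        # e.g. 'VF', 'VOF', 'VO-STF' -> preferred on a French page; 'VA' -> not
        lang_rexs = [r'VO?%s' % langcode, r'VO?.-ST%s' % langcode]
        return 10 if any(re.match(r, version_code) for r in lang_rexs) else -10

    assert language_preference('VF', 'fr') == 10
    assert language_preference('VO-STF', 'fr') == 10
    assert language_preference('VA', 'fr') == -10
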
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/atresplayer.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/atresplayer.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/atresplayer.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,163 @@
+from __future__ import unicode_literals
+
+import time
+import hmac
+
+from .common import InfoExtractor
+from ..compat import (
+    compat_str,
+    compat_urllib_parse,
+    compat_urllib_request,
+)
+from ..utils import (
+    int_or_none,
+    float_or_none,
+    xpath_text,
+    ExtractorError,
+)
+
+
+class AtresPlayerIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?atresplayer\.com/television/[^/]+/[^/]+/[^/]+/(?P<id>.+?)_\d+\.html'
+    _NETRC_MACHINE = 'atresplayer'
+    _TESTS = [
+        {
+            'url': 'http://www.atresplayer.com/television/programas/el-club-de-la-comedia/temporada-4/capitulo-10-especial-solidario-nochebuena_2014122100174.html',
+            'md5': 'efd56753cda1bb64df52a3074f62e38a',
+            'info_dict': {
+                'id': 'capitulo-10-especial-solidario-nochebuena',
+                'ext': 'mp4',
+                'title': 'Especial Solidario de Nochebuena',
+                'description': 'md5:e2d52ff12214fa937107d21064075bf1',
+                'duration': 5527.6,
+                'thumbnail': 're:^https?://.*\.jpg$',
+            },
+        },
+        {
+            'url': 'http://www.atresplayer.com/television/series/el-secreto-de-puente-viejo/el-chico-de-los-tres-lunares/capitulo-977-29-12-14_2014122400174.html',
+            'only_matching': True,
+        },
+    ]
+
+    _USER_AGENT = 'Dalvik/1.6.0 (Linux; U; Android 4.3; GT-I9300 Build/JSS15J)'
+    _MAGIC = 'QWtMLXs414Yo+c#_+Q#K@NN)'
+    _TIMESTAMP_SHIFT = 30000
+
+    _TIME_API_URL = 'http://servicios.atresplayer.com/api/admin/time.json'
+    _URL_VIDEO_TEMPLATE = 'https://servicios.atresplayer.com/api/urlVideo/{1}/{0}/{1}|{2}|{3}.json'
+    _PLAYER_URL_TEMPLATE = 'https://servicios.atresplayer.com/episode/getplayer.json?episodePk=%s'
+    _EPISODE_URL_TEMPLATE = 'http://www.atresplayer.com/episodexml/%s'
+
+    _LOGIN_URL = 'https://servicios.atresplayer.com/j_spring_security_check'
+
+    def _real_initialize(self):
+        self._login()
+
+    def _login(self):
+        (username, password) = self._get_login_info()
+        if username is None:
+            return
+
+        login_form = {
+            'j_username': username,
+            'j_password': password,
+        }
+
+        request = compat_urllib_request.Request(
+            self._LOGIN_URL, compat_urllib_parse.urlencode(login_form).encode('utf-8'))
+        request.add_header('Content-Type', 'application/x-www-form-urlencoded')
+        response = self._download_webpage(
+            request, None, 'Logging in as %s' % username)
+
+        error = self._html_search_regex(
+            r'(?s)<ul class="list_error">(.+?)</ul>', response, 'error', default=None)
+        if error:
+            raise ExtractorError(
+                'Unable to login: %s' % error, expected=True)
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
+        webpage = self._download_webpage(url, video_id)
+
+        episode_id = self._search_regex(
+            r'episode="([^"]+)"', webpage, 'episode id')
+
+        timestamp = int_or_none(self._download_webpage(
+            self._TIME_API_URL,
+            video_id, 'Downloading timestamp', fatal=False), 1000, time.time())
+        timestamp_shifted = compat_str(timestamp + self._TIMESTAMP_SHIFT)
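+        # NOTE: hmac.new() without digestmod historically defaults to MD5;
+        # Python 3.8+ makes digestmod mandatory, so this call would need
+        # digestmod=hashlib.md5 there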
+        token = hmac.new(
+            self._MAGIC.encode('ascii'),
+            (episode_id + timestamp_shifted).encode('utf-8')
+        ).hexdigest()
+
+        formats = []
+        for fmt in ['windows', 'android_tablet']:
+            request = compat_urllib_request.Request(
+                self._URL_VIDEO_TEMPLATE.format(fmt, episode_id, timestamp_shifted, token))
+            request.add_header('User-Agent', self._USER_AGENT)
+
+            fmt_json = self._download_json(
+                request, video_id, 'Downloading %s video JSON' % fmt)
+
+            result = fmt_json.get('resultDes')
+            if result.lower() != 'ok':
+                raise ExtractorError(
+                    '%s returned error: %s' % (self.IE_NAME, result), expected=True)
+
+            for format_id, video_url in fmt_json['resultObject'].items():
+                if format_id == 'token' or not video_url.startswith('http'):
+                    continue
+                if video_url.endswith('/Manifest'):
+                    if 'geodeswowsmpra3player' in video_url:
+                        f4m_path = video_url.split('smil:', 1)[-1].split('free_', 1)[0]
+                        f4m_url = 'http://drg.antena3.com/{0}hds/es/sd.f4m'.format(f4m_path)
+                        # these videos are protected by DRM, which the f4m downloader doesn't support
+                        continue
+                    else:
+                        f4m_url = video_url[:-9] + '/manifest.f4m'
+                    formats.extend(self._extract_f4m_formats(f4m_url, video_id))
+                else:
+                    formats.append({
+                        'url': video_url,
+                        'format_id': 'android-%s' % format_id,
+                        'preference': 1,
+                    })
+        self._sort_formats(formats)
+
+        player = self._download_json(
+            self._PLAYER_URL_TEMPLATE % episode_id,
+            episode_id)
+
+        path_data = player.get('pathData')
+
+        episode = self._download_xml(
+            self._EPISODE_URL_TEMPLATE % path_data,
+            video_id, 'Downloading episode XML')
+
+        duration = float_or_none(xpath_text(
+            episode, './media/asset/info/technical/contentDuration', 'duration'))
+
+        art = episode.find('./media/asset/info/art')
+        title = xpath_text(art, './name', 'title')
+        description = xpath_text(art, './description', 'description')
+        thumbnail = xpath_text(episode, './media/asset/files/background', 'thumbnail')
+
+        subtitles = {}
+        subtitle_url = xpath_text(episode, './media/asset/files/subtitle', 'subtitle')
+        if subtitle_url:
+            subtitles['es'] = [{
+                'ext': 'srt',
+                'url': subtitle_url,
+            }]
+
+        return {
+            'id': video_id,
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'duration': duration,
+            'formats': formats,
+            'subtitles': subtitles,
+        }

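The anti-hotlinking token above is just an HMAC-MD5 over the episode id concatenated
with a slightly shifted server timestamp. A self-contained sketch of the same
construction (make_token is an illustrative helper; the digestmod is spelled out
explicitly because relying on the MD5 default breaks on Python 3.8+):

    import hashlib
    import hmac
    import time

    MAGIC = 'QWtMLXs414Yo+c#_+Q#K@NN)'  # key used by the extractor above
    TIMESTAMP_SHIFT = 30000

    def make_token(episode_id, server_timestamp=None):
        # token = HMAC-MD5(magic, episode_id + str(timestamp + shift))
        timestamp = int(time.time()) if server_timestamp is None else server_timestamp
        shifted = str(timestamp + TIMESTAMP_SHIFT)
        return hmac.new(
            MAGIC.encode('ascii'),
            (episode_id + shifted).encode('utf-8'),
            digestmod=hashlib.md5,
        ).hexdigest()
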
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/atttechchannel.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/atttechchannel.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/atttechchannel.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,55 @@
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import unified_strdate
+
+
+class ATTTechChannelIE(InfoExtractor):
+    _VALID_URL = r'https?://techchannel\.att\.com/play-video\.cfm/([^/]+/)*(?P<id>.+)'
+    _TEST = {
+        'url': 'http://techchannel.att.com/play-video.cfm/2014/1/27/ATT-Archives-The-UNIX-System-Making-Computers-Easier-to-Use',
+        'info_dict': {
+            'id': '11316',
+            'display_id': 'ATT-Archives-The-UNIX-System-Making-Computers-Easier-to-Use',
+            'ext': 'flv',
+            'title': 'AT&T Archives : The UNIX System: Making Computers Easier to Use',
+            'description': 'A 1982 film about UNIX is the foundation for software in use around Bell Labs and AT&T.',
+            'thumbnail': 're:^https?://.*\.jpg$',
+            'upload_date': '20140127',
+        },
+        'params': {
+            # rtmp download
+            'skip_download': True,
+        },
+    }
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+
+        webpage = self._download_webpage(url, display_id)
+
+        video_url = self._search_regex(
+            r"url\s*:\s*'(rtmp://[^']+)'",
+            webpage, 'video URL')
+
+        video_id = self._search_regex(
+            r'mediaid\s*=\s*(\d+)',
+            webpage, 'video id', fatal=False)
+
+        title = self._og_search_title(webpage)
+        description = self._og_search_description(webpage)
+        thumbnail = self._og_search_thumbnail(webpage)
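+        # Release dates on the page are US-style M/D/YYYY, hence the trailing
+        # False (day_first) argument passed to unified_strdate() below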
+        upload_date = unified_strdate(self._search_regex(
+            r'[Rr]elease\s+date:\s*(\d{1,2}/\d{1,2}/\d{4})',
+            webpage, 'upload date', fatal=False), False)
+
+        return {
+            'id': video_id,
+            'display_id': display_id,
+            'url': video_url,
+            'ext': 'flv',
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'upload_date': upload_date,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/audiomack.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/audiomack.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/audiomack.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,144 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import itertools
+import time
+
+from .common import InfoExtractor
+from .soundcloud import SoundcloudIE
+from ..utils import (
+    ExtractorError,
+    url_basename,
+)
+
+
+class AudiomackIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?audiomack\.com/song/(?P<id>[\w/-]+)'
+    IE_NAME = 'audiomack'
+    _TESTS = [
+        # hosted on audiomack
+        {
+            'url': 'http://www.audiomack.com/song/roosh-williams/extraordinary',
+            'info_dict':
+            {
+                'id': '310086',
+                'ext': 'mp3',
+                'uploader': 'Roosh Williams',
+                'title': 'Extraordinary'
+            }
+        },
+        # audiomack wrapper around soundcloud song
+        {
+            'add_ie': ['Soundcloud'],
+            'url': 'http://www.audiomack.com/song/xclusiveszone/take-kare',
+            'info_dict': {
+                'id': '172419696',
+                'ext': 'mp3',
+                'description': 'md5:1fc3272ed7a635cce5be1568c2822997',
+                'title': 'Young Thug ft Lil Wayne - Take Kare',
+                'uploader': 'Young Thug World',
+                'upload_date': '20141016',
+            }
+        },
+    ]
+
+    def _real_extract(self, url):
+        # URLs end with [uploader name]/[uploader title]; that title is
+        # whatever the user typed in and is rarely the proper song title.
+        # Real metadata is in the API response
+        album_url_tag = self._match_id(url)
+
+        # Request the extended version of the api for extra fields like artist and title
+        api_response = self._download_json(
+            'http://www.audiomack.com/api/music/url/song/%s?extended=1&_=%d' % (
+                album_url_tag, time.time()),
+            album_url_tag)
+
+        # API is inconsistent with errors
+        if 'url' not in api_response or not api_response['url'] or 'error' in api_response:
+            raise ExtractorError('Invalid url %s' % url)
+
+        # Audiomack wraps a lot of soundcloud tracks in their branded wrapper
+        # if so, pass the work off to the soundcloud extractor
+        if SoundcloudIE.suitable(api_response['url']):
+            return {'_type': 'url', 'url': api_response['url'], 'ie_key': 'Soundcloud'}
+
+        return {
+            'id': api_response.get('id', album_url_tag),
+            'uploader': api_response.get('artist'),
+            'title': api_response.get('title'),
+            'url': api_response['url'],
+        }
+
+
+class AudiomackAlbumIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?audiomack\.com/album/(?P<id>[\w/-]+)'
+    IE_NAME = 'audiomack:album'
+    _TESTS = [
+        # Standard album playlist
+        {
+            'url': 'http://www.audiomack.com/album/flytunezcom/tha-tour-part-2-mixtape',
+            'playlist_count': 15,
+            'info_dict':
+            {
+                'id': '812251',
+                'title': 'Tha Tour: Part 2 (Official Mixtape)'
+            }
+        },
+        # Album playlist ripped from fakeshoredrive with no metadata
+        {
+            'url': 'http://www.audiomack.com/album/fakeshoredrive/ppp-pistol-p-project',
+            'info_dict': {
+                'title': 'PPP (Pistol P Project)',
+                'id': '837572',
+            },
+            'playlist': [{
+                'info_dict': {
+                    'title': 'PPP (Pistol P Project) - 9. Heaven or Hell (CHIMACA) ft Zuse (prod by DJ FU)',
+                    'id': '837577',
+                    'ext': 'mp3',
+                    'uploader': 'Lil Herb a.k.a. G Herbo',
+                }
+            }],
+            'params': {
+                'playliststart': 9,
+                'playlistend': 9,
+            }
+        }
+    ]
+
+    def _real_extract(self, url):
+        # URLs end with [uploader name]/[uploader title]; that title is
+        # whatever the user typed in and is rarely the proper song title.
+        # Real metadata is in the API response
+        album_url_tag = self._match_id(url)
+        result = {'_type': 'playlist', 'entries': []}
+        # There is no single endpoint for album metadata - instead it is included/repeated in each song's metadata
+        # Therefore we don't know how many songs the album has and must loop until the API stops returning songs
+        for track_no in itertools.count():
+            # Get song's metadata
+            api_response = self._download_json(
+                'http://www.audiomack.com/api/music/url/album/%s/%d?extended=1&_=%d'
+                % (album_url_tag, track_no, time.time()), album_url_tag,
+                note='Querying song information (%d)' % (track_no + 1))
+
+            # Total failure, only occurs when url is totally wrong
+            # Won't happen in middle of valid playlist (next case)
+            if 'url' not in api_response or 'error' in api_response:
+                raise ExtractorError('Invalid url for track %d of album url %s' % (track_no, url))
+            # URL is good but song id doesn't exist - usually means end of playlist
+            elif not api_response['url']:
+                break
+            else:
+                # Pull out the album metadata and add to result (if it exists)
+                for resultkey, apikey in [('id', 'album_id'), ('title', 'album_title')]:
+                    if apikey in api_response and resultkey not in result:
+                        result[resultkey] = api_response[apikey]
+                song_id = url_basename(api_response['url']).rpartition('.')[0]
+                result['entries'].append({
+                    'id': api_response.get('id', song_id),
+                    'uploader': api_response.get('artist'),
+                    'title': api_response.get('title', song_id),
+                    'url': api_response['url'],
+                })
+        return result

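Since the Audiomack API exposes no album endpoint, AudiomackAlbumIE above probes
numbered track endpoints until one comes back with an empty url. The pattern reduced
to its skeleton (fetch_track is a stand-in for the JSON request in the extractor):

    import itertools

    def collect_album_tracks(fetch_track):
        """fetch_track(n) -> dict with a 'url' key; an empty url ends the album."""
        entries = []
        for track_no in itertools.count():
            response = fetch_track(track_no)
            if 'url' not in response or 'error' in response:
                raise ValueError('invalid album (track %d)' % track_no)
            if not response['url']:
                break  # walked past the last track
            entries.append(response)
        return entries
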
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/azubu.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/azubu.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/azubu.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,93 @@
+from __future__ import unicode_literals
+
+import json
+
+from .common import InfoExtractor
+from ..utils import float_or_none
+
+
+class AzubuIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?azubu\.tv/[^/]+#!/play/(?P<id>\d+)'
+    _TESTS = [
+        {
+            'url': 'http://www.azubu.tv/GSL#!/play/15575/2014-hot6-cup-last-big-match-ro8-day-1',
+            'md5': 'a88b42fcf844f29ad6035054bd9ecaf4',
+            'info_dict': {
+                'id': '15575',
+                'ext': 'mp4',
+                'title': '2014 HOT6 CUP LAST BIG MATCH Ro8 Day 1',
+                'description': 'md5:d06bdea27b8cc4388a90ad35b5c66c01',
+                'thumbnail': 're:^https?://.*\.jpe?g',
+                'timestamp': 1417523507.334,
+                'upload_date': '20141202',
+                'duration': 9988.7,
+                'uploader': 'GSL',
+                'uploader_id': 414310,
+                'view_count': int,
+            },
+        },
+        {
+            'url': 'http://www.azubu.tv/FnaticTV#!/play/9344/-fnatic-at-worlds-2014:-toyz---%22i-love-rekkles,-he-has-amazing-mechanics%22-',
+            'md5': 'b72a871fe1d9f70bd7673769cdb3b925',
+            'info_dict': {
+                'id': '9344',
+                'ext': 'mp4',
+                'title': 'Fnatic at Worlds 2014: Toyz - "I love Rekkles, he has amazing mechanics"',
+                'description': 'md5:4a649737b5f6c8b5c5be543e88dc62af',
+                'thumbnail': 're:^https?://.*\.jpe?g',
+                'timestamp': 1410530893.320,
+                'upload_date': '20140912',
+                'duration': 172.385,
+                'uploader': 'FnaticTV',
+                'uploader_id': 272749,
+                'view_count': int,
+            },
+        },
+    ]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
+        data = self._download_json(
+            'http://www.azubu.tv/api/video/%s' % video_id, video_id)['data']
+
+        title = data['title'].strip()
+        description = data['description']
+        thumbnail = data['thumbnail']
+        view_count = data['view_count']
+        uploader = data['user']['username']
+        uploader_id = data['user']['id']
+
+        stream_params = json.loads(data['stream_params'])
+
+        timestamp = float_or_none(stream_params['creationDate'], 1000)
+        duration = float_or_none(stream_params['length'], 1000)
+
+        renditions = stream_params.get('renditions') or []
+        video = stream_params.get('FLVFullLength') or stream_params.get('videoFullLength')
+        if video:
+            renditions.append(video)
+
+        formats = [{
+            'url': fmt['url'],
+            'width': fmt['frameWidth'],
+            'height': fmt['frameHeight'],
+            'vbr': float_or_none(fmt['encodingRate'], 1000),
+            'filesize': fmt['size'],
+            'vcodec': fmt['videoCodec'],
+            'container': fmt['videoContainer'],
+        } for fmt in renditions if fmt['url']]
+        self._sort_formats(formats)
+
+        return {
+            'id': video_id,
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'timestamp': timestamp,
+            'duration': duration,
+            'uploader': uploader,
+            'uploader_id': uploader_id,
+            'view_count': view_count,
+            'formats': formats,
+        }

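The float_or_none(value, 1000) calls above are how the extractor converts Azubu's
millisecond timestamps and durations to seconds while tolerating missing fields.
Roughly (the real youtube_dl.utils helper also takes invscale and default parameters):

    def float_or_none(value, scale=1):
        # None (a missing JSON field) passes straight through
        if value is None:
            return None
        return float(value) / scale

    assert float_or_none(1417523507334, 1000) == 1417523507.334  # ms -> s
    assert float_or_none(None, 1000) is None
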
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/baidu.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/baidu.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/baidu.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,69 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..compat import compat_urlparse
+
+
+class BaiduVideoIE(InfoExtractor):
+    IE_DESC = '百度视频'
+    _VALID_URL = r'http://v\.baidu\.com/(?P<type>[a-z]+)/(?P<id>\d+)\.htm'
+    _TESTS = [{
+        'url': 'http://v.baidu.com/comic/1069.htm?frp=bdbrand&q=%E4%B8%AD%E5%8D%8E%E5%B0%8F%E5%BD%93%E5%AE%B6',
+        'info_dict': {
+            'id': '1069',
+            'title': '中华小当家 TV版 (全52集)',
+            'description': 'md5:395a419e41215e531c857bb037bbaf80',
+        },
+        'playlist_count': 52,
+    }, {
+        'url': 'http://v.baidu.com/show/11595.htm?frp=bdbrand',
+        'info_dict': {
+            'id': '11595',
+            'title': 're:^奔跑吧兄弟',
+            'description': 'md5:1bf88bad6d850930f542d51547c089b8',
+        },
+        'playlist_mincount': 3,
+    }]
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        playlist_id = mobj.group('id')
+        category = category2 = mobj.group('type')
+        if category == 'show':
+            category2 = 'tvshow'
+
+        webpage = self._download_webpage(url, playlist_id)
+
+        playlist_title = self._html_search_regex(
+            r'title\s*:\s*(["\'])(?P<title>[^"\']+)\1', webpage,
+            'playlist title', group='title')
+        playlist_description = self._html_search_regex(
+            r'<input[^>]+class="j-data-intro"[^>]+value="([^"]+)"/>', webpage,
+            'playlist description')
+
+        site = self._html_search_regex(
+            r'filterSite\s*:\s*["\']([^"]*)["\']', webpage,
+            'primary provider site')
+        api_result = self._download_json(
+            'http://v.baidu.com/%s_intro/?dtype=%sPlayUrl&id=%s&site=%s' % (
+                category, category2, playlist_id, site),
+            playlist_id, 'Downloading playlist links')
+
+        entries = []
+        for episode in api_result[0]['episodes']:
+            episode_id = '%s_%s' % (playlist_id, episode['episode'])
+
+            redirect_page = self._download_webpage(
+                compat_urlparse.urljoin(url, episode['url']), episode_id,
+                note='Downloading Baidu redirect page')
+            real_url = self._html_search_regex(
+                r'location\.replace\("([^"]+)"\)', redirect_page, 'real URL')
+
+            entries.append(self.url_result(
+                real_url, video_title=episode['single_title']))
+
+        return self.playlist_result(
+            entries, playlist_id, playlist_title, playlist_description)

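Baidu's per-episode links above don't issue HTTP redirects; each redirect page embeds a
location.replace("...") call whose argument is the real provider URL, which the
extractor pulls out with a regex. A minimal sketch, assuming redirect_page holds the
downloaded HTML (real_url_from_redirect is an illustrative name):

    import re

    def real_url_from_redirect(redirect_page):
        m = re.search(r'location\.replace\("([^"]+)"\)', redirect_page)
        if m is None:
            raise ValueError('no location.replace() redirect found')
        return m.group(1)

    page = '<script>location.replace("http://v.example.com/ep1")</script>'
    assert real_url_from_redirect(page) == 'http://v.example.com/ep1'
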
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bambuser.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bambuser.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bambuser.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,144 @@
+from __future__ import unicode_literals
+
+import re
+import itertools
+
+from .common import InfoExtractor
+from ..compat import (
+    compat_urllib_parse,
+    compat_urllib_request,
+    compat_str,
+)
+from ..utils import (
+    ExtractorError,
+    int_or_none,
+    float_or_none,
+)
+
+
+class BambuserIE(InfoExtractor):
+    IE_NAME = 'bambuser'
+    _VALID_URL = r'https?://bambuser\.com/v/(?P<id>\d+)'
+    _API_KEY = '005f64509e19a868399060af746a00aa'
+    _LOGIN_URL = 'https://bambuser.com/user'
+    _NETRC_MACHINE = 'bambuser'
+
+    _TEST = {
+        'url': 'http://bambuser.com/v/4050584',
+        # MD5 seems to be flaky, see https://travis-ci.org/rg3/youtube-dl/jobs/14051016#L388
+        # 'md5': 'fba8f7693e48fd4e8641b3fd5539a641',
+        'info_dict': {
+            'id': '4050584',
+            'ext': 'flv',
+            'title': 'Education engineering days - lightning talks',
+            'duration': 3741,
+            'uploader': 'pixelversity',
+            'uploader_id': '344706',
+            'timestamp': 1382976692,
+            'upload_date': '20131028',
+            'view_count': int,
+        },
+        'params': {
+            # The server doesn't respect the 'Range' header and would download the whole
+            # video, which caused the Travis builds to fail: https://travis-ci.org/rg3/youtube-dl/jobs/14493845#L59
+            'skip_download': True,
+        },
+    }
+
+    def _login(self):
+        (username, password) = self._get_login_info()
+        if username is None:
+            return
+
+        login_form = {
+            'form_id': 'user_login',
+            'op': 'Log in',
+            'name': username,
+            'pass': password,
+        }
+
+        request = compat_urllib_request.Request(
+            self._LOGIN_URL, compat_urllib_parse.urlencode(login_form).encode('utf-8'))
+        request.add_header('Referer', self._LOGIN_URL)
+        response = self._download_webpage(
+            request, None, 'Logging in as %s' % username)
+
+        login_error = self._html_search_regex(
+            r'(?s)<div class="messages error">(.+?)</div>',
+            response, 'login error', default=None)
+        if login_error:
+            raise ExtractorError(
+                'Unable to login: %s' % login_error, expected=True)
+
+    def _real_initialize(self):
+        self._login()
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
+        info = self._download_json(
+            'http://player-c.api.bambuser.com/getVideo.json?api_key=%s&vid=%s'
+            % (self._API_KEY, video_id), video_id)
+
+        error = info.get('error')
+        if error:
+            raise ExtractorError(
+                '%s returned error: %s' % (self.IE_NAME, error), expected=True)
+
+        result = info['result']
+
+        return {
+            'id': video_id,
+            'title': result['title'],
+            'url': result['url'],
+            'thumbnail': result.get('preview'),
+            'duration': int_or_none(result.get('length')),
+            'uploader': result.get('username'),
+            'uploader_id': compat_str(result.get('owner', {}).get('uid')),
+            'timestamp': int_or_none(result.get('created')),
+            'fps': float_or_none(result.get('framerate')),
+            'view_count': int_or_none(result.get('views_total')),
+            'comment_count': int_or_none(result.get('comment_count')),
+        }
+
+
+class BambuserChannelIE(InfoExtractor):
+    IE_NAME = 'bambuser:channel'
+    _VALID_URL = r'https?://bambuser\.com/channel/(?P<user>.*?)(?:/|#|\?|$)'
+    # The maximum number of results we can get with each request
+    _STEP = 50
+    _TEST = {
+        'url': 'http://bambuser.com/channel/pixelversity',
+        'info_dict': {
+            'title': 'pixelversity',
+        },
+        'playlist_mincount': 60,
+    }
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        user = mobj.group('user')
+        urls = []
+        last_id = ''
+        for i in itertools.count(1):
+            req_url = (
+                'http://bambuser.com/xhr-api/index.php?username={user}'
+                '&sort=created&access_mode=0%2C1%2C2&limit={count}'
+                '&method=broadcast&format=json&vid_older_than={last}'
+            ).format(user=user, count=self._STEP, last=last_id)
+            req = compat_urllib_request.Request(req_url)
+            # Without setting this header, we wouldn't get any result
+            req.add_header('Referer', 'http://bambuser.com/channel/%s' % user)
+            data = self._download_json(
+                req, user, 'Downloading page %d' % i)
+            results = data['result']
+            if not results:
+                break
+            last_id = results[-1]['vid']
+            urls.extend(self.url_result(v['page'], 'Bambuser') for v in results)
+
+        return {
+            '_type': 'playlist',
+            'title': user,
+            'entries': urls,
+        }

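Unlike the index-probing loop in AudiomackAlbumIE, BambuserChannelIE above pages with a
cursor: each request passes the oldest video id seen so far as vid_older_than and stops
on an empty result. The skeleton of that pattern (fetch_page is a stand-in for the XHR
request in the extractor):

    def collect_channel_videos(fetch_page):
        """fetch_page(last_id) -> list of dicts with a 'vid' key, oldest last."""
        videos = []
        last_id = ''
        while True:
            results = fetch_page(last_id)
            if not results:
                break  # nothing older than last_id is left
            videos.extend(results)
            last_id = results[-1]['vid']  # cursor for the next request
        return videos
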
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bandcamp.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bandcamp.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bandcamp.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,181 @@
+from __future__ import unicode_literals
+
+import json
+import re
+
+from .common import InfoExtractor
+from ..compat import (
+    compat_str,
+    compat_urlparse,
+)
+from ..utils import (
+    ExtractorError,
+)
+
+
+class BandcampIE(InfoExtractor):
+    _VALID_URL = r'https?://.*?\.bandcamp\.com/track/(?P<title>.*)'
+    _TESTS = [{
+        'url': 'http://youtube-dl.bandcamp.com/track/youtube-dl-test-song',
+        'md5': 'c557841d5e50261777a6585648adf439',
+        'info_dict': {
+            'id': '1812978515',
+            'ext': 'mp3',
+            'title': "youtube-dl  \"'/\\\u00e4\u21ad - youtube-dl test song \"'/\\\u00e4\u21ad",
+            'duration': 9.8485,
+        },
+        'skip': 'There is a limit of 200 free downloads / month for the test song'
+    }, {
+        'url': 'http://benprunty.bandcamp.com/track/lanius-battle',
+        'md5': '2b68e5851514c20efdff2afc5603b8b4',
+        'info_dict': {
+            'id': '2650410135',
+            'ext': 'mp3',
+            'title': 'Lanius (Battle)',
+            'uploader': 'Ben Prunty Music',
+        },
+    }]
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        title = mobj.group('title')
+        webpage = self._download_webpage(url, title)
+        m_download = re.search(r'freeDownloadPage: "(.*?)"', webpage)
+        if not m_download:
+            m_trackinfo = re.search(r'trackinfo: (.+),\s*?\n', webpage)
+            if m_trackinfo:
+                json_code = m_trackinfo.group(1)
+                data = json.loads(json_code)[0]
+
+                formats = []
+                for format_id, format_url in data['file'].items():
+                    ext, abr_str = format_id.split('-', 1)
+                    formats.append({
+                        'format_id': format_id,
+                        'url': format_url,
+                        'ext': ext,
+                        'vcodec': 'none',
+                        'acodec': ext,
+                        'abr': int(abr_str),
+                    })
+
+                self._sort_formats(formats)
+
+                return {
+                    'id': compat_str(data['id']),
+                    'title': data['title'],
+                    'formats': formats,
+                    'duration': float(data['duration']),
+                }
+            else:
+                raise ExtractorError('No free songs found')
+
+        download_link = m_download.group(1)
+        video_id = self._search_regex(
+            r'(?ms)var TralbumData = .*?[{,]\s*id: (?P<id>\d+),?$',
+            webpage, 'video id')
+
+        download_webpage = self._download_webpage(download_link, video_id, 'Downloading free downloads page')
+        # We get the dictionary of the track from some javascript code
+        all_info = self._parse_json(self._search_regex(
+            r'(?sm)items: (.*?),$', download_webpage, 'items'), video_id)
+        info = all_info[0]
+        # We pick mp3-320 for now, until format selection can be easily implemented.
+        mp3_info = info['downloads']['mp3-320']
+        # If we try to use this url it says the link has expired
+        initial_url = mp3_info['url']
+        m_url = re.match(
+            r'(?P<server>http://(.*?)\.bandcamp\.com)/download/track\?enc=mp3-320&fsig=(?P<fsig>.*?)&id=(?P<id>.*?)&ts=(?P<ts>.*)$',
+            initial_url)
+        # We build the url we will use to get the final track url
+        # This url is built by Bandcamp in the script download_bunde_*.js
+        request_url = '%s/statdownload/track?enc=mp3-320&fsig=%s&id=%s&ts=%s&.rand=665028774616&.vrs=1' % (m_url.group('server'), m_url.group('fsig'), video_id, m_url.group('ts'))
+        final_url_webpage = self._download_webpage(request_url, video_id, 'Requesting download url')
+        # If we could correctly generate the .rand field the url would be
+        # in the "download_url" key; instead we fall back to "retry_url"
+        final_url = self._search_regex(
+            r'"retry_url":"(.*?)"', final_url_webpage, 'final video URL')
+
+        return {
+            'id': video_id,
+            'title': info['title'],
+            'ext': 'mp3',
+            'vcodec': 'none',
+            'url': final_url,
+            'thumbnail': info.get('thumb_url'),
+            'uploader': info.get('artist'),
+        }
+
+
+class BandcampAlbumIE(InfoExtractor):
+    IE_NAME = 'bandcamp:album'
+    _VALID_URL = r'https?://(?:(?P<subdomain>[^.]+)\.)?bandcamp\.com(?:/album/(?P<album_id>[^?#]+)|/?(?:$|[?#]))'
+
+    _TESTS = [{
+        'url': 'http://blazo.bandcamp.com/album/jazz-format-mixtape-vol-1',
+        'playlist': [
+            {
+                'md5': '39bc1eded3476e927c724321ddf116cf',
+                'info_dict': {
+                    'id': '1353101989',
+                    'ext': 'mp3',
+                    'title': 'Intro',
+                }
+            },
+            {
+                'md5': '1a2c32e2691474643e912cc6cd4bffaa',
+                'info_dict': {
+                    'id': '38097443',
+                    'ext': 'mp3',
+                    'title': 'Kero One - Keep It Alive (Blazo remix)',
+                }
+            },
+        ],
+        'info_dict': {
+            'title': 'Jazz Format Mixtape vol.1',
+            'id': 'jazz-format-mixtape-vol-1',
+            'uploader_id': 'blazo',
+        },
+        'params': {
+            'playlistend': 2
+        },
+        'skip': 'Bandcamp imposes download limits.'
+    }, {
+        'url': 'http://nightbringer.bandcamp.com/album/hierophany-of-the-open-grave',
+        'info_dict': {
+            'title': 'Hierophany of the Open Grave',
+            'uploader_id': 'nightbringer',
+            'id': 'hierophany-of-the-open-grave',
+        },
+        'playlist_mincount': 9,
+    }, {
+        'url': 'http://dotscale.bandcamp.com',
+        'info_dict': {
+            'title': 'Loom',
+            'id': 'dotscale',
+            'uploader_id': 'dotscale',
+        },
+        'playlist_mincount': 7,
+    }]
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        uploader_id = mobj.group('subdomain')
+        album_id = mobj.group('album_id')
+        playlist_id = album_id or uploader_id
+        webpage = self._download_webpage(url, playlist_id)
+        tracks_paths = re.findall(r'<a href="(.*?)" itemprop="url">', webpage)
+        if not tracks_paths:
+            raise ExtractorError('The page doesn\'t contain any tracks')
+        entries = [
+            self.url_result(compat_urlparse.urljoin(url, t_path), ie=BandcampIE.ie_key())
+            for t_path in tracks_paths]
+        title = self._search_regex(
+            r'album_title\s*:\s*"(.*?)"', webpage, 'title', fatal=False)
+        return {
+            '_type': 'playlist',
+            'uploader_id': uploader_id,
+            'id': playlist_id,
+            'title': title,
+            'entries': entries,
+        }

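Both Bandcamp code paths above rely on one technique: the track data is not in the HTML
proper but in JavaScript literals (trackinfo: [...] and var TralbumData = {...}), so the
extractor regexes the literal out and hands it to a JSON parser. A reduced sketch of
that step (parse_trackinfo is an illustrative name; it assumes the same single-line
trackinfo layout the regex above matches):

    import json
    import re

    def parse_trackinfo(webpage):
        m = re.search(r'trackinfo: (.+),\s*?\n', webpage)
        if m is None:
            raise ValueError('no trackinfo found')
        return json.loads(m.group(1))

    page = 'var Data = {\n  trackinfo: [{"id": 1812978515}],\n};\n'
    assert parse_trackinfo(page)[0]['id'] == 1812978515
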
=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bbccouk.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bbccouk.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bbccouk.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,379 @@
+from __future__ import unicode_literals
+
+import xml.etree.ElementTree
+
+from .common import InfoExtractor
+from ..utils import (
+    ExtractorError,
+    int_or_none,
+)
+from ..compat import compat_HTTPError
+
+
+class BBCCoUkIE(InfoExtractor):
+    IE_NAME = 'bbc.co.uk'
+    IE_DESC = 'BBC iPlayer'
+    _VALID_URL = r'https?://(?:www\.)?bbc\.co\.uk/(?:(?:(?:programmes|iplayer(?:/[^/]+)?/(?:episode|playlist))/)|music/clips[/#])(?P<id>[\da-z]{8})'
+
+    _TESTS = [
+        {
+            'url': 'http://www.bbc.co.uk/programmes/b039g8p7',
+            'info_dict': {
+                'id': 'b039d07m',
+                'ext': 'flv',
+                'title': 'Kaleidoscope, Leonard Cohen',
+                'description': 'The Canadian poet and songwriter reflects on his musical career.',
+                'duration': 1740,
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            }
+        },
+        {
+            'url': 'http://www.bbc.co.uk/iplayer/episode/b00yng5w/The_Man_in_Black_Series_3_The_Printed_Name/',
+            'info_dict': {
+                'id': 'b00yng1d',
+                'ext': 'flv',
+                'title': 'The Man in Black: Series 3: The Printed Name',
+                'description': "Mark Gatiss introduces Nicholas Pierpan's chilling tale of a writer's devilish pact with a mysterious man. Stars Ewan Bailey.",
+                'duration': 1800,
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            },
+            'skip': 'Episode is no longer available on BBC iPlayer Radio',
+        },
+        {
+            'url': 'http://www.bbc.co.uk/iplayer/episode/b03vhd1f/The_Voice_UK_Series_3_Blind_Auditions_5/',
+            'info_dict': {
+                'id': 'b00yng1d',
+                'ext': 'flv',
+                'title': 'The Voice UK: Series 3: Blind Auditions 5',
+                'description': "Emma Willis and Marvin Humes present the fifth set of blind auditions in the singing competition, as the coaches continue to build their teams based on voice alone.",
+                'duration': 5100,
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            },
+            'skip': 'Currently BBC iPlayer TV programmes are available to play in the UK only',
+        },
+        {
+            'url': 'http://www.bbc.co.uk/iplayer/episode/p026c7jt/tomorrows-worlds-the-unearthly-history-of-science-fiction-2-invasion',
+            'info_dict': {
+                'id': 'b03k3pb7',
+                'ext': 'flv',
+                'title': "Tomorrow's Worlds: The Unearthly History of Science Fiction",
+                'description': '2. Invasion',
+                'duration': 3600,
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            },
+            'skip': 'Currently BBC iPlayer TV programmes are available to play in the UK only',
+        }, {
+            'url': 'http://www.bbc.co.uk/programmes/b04v20dw',
+            'info_dict': {
+                'id': 'b04v209v',
+                'ext': 'flv',
+                'title': 'Pete Tong, The Essential New Tune Special',
+                'description': "Pete has a very special mix - all of 2014's Essential New Tunes!",
+                'duration': 10800,
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            }
+        }, {
+            'url': 'http://www.bbc.co.uk/music/clips/p02frcc3',
+            'note': 'Audio',
+            'info_dict': {
+                'id': 'p02frcch',
+                'ext': 'flv',
+                'title': 'Pete Tong, Past, Present and Future Special, Madeon - After Hours mix',
+                'description': 'French house superstar Madeon takes us out of the club and onto the after party.',
+                'duration': 3507,
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            }
+        }, {
+            'url': 'http://www.bbc.co.uk/music/clips/p025c0zz',
+            'note': 'Video',
+            'info_dict': {
+                'id': 'p025c103',
+                'ext': 'flv',
+                'title': 'Reading and Leeds Festival, 2014, Rae Morris - Closer (Live on BBC Three)',
+                'description': 'Rae Morris performs Closer for BBC Three at Reading 2014',
+                'duration': 226,
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            }
+        }, {
+            'url': 'http://www.bbc.co.uk/iplayer/episode/b054fn09/ad/natural-world-20152016-2-super-powered-owls',
+            'info_dict': {
+                'id': 'p02n76xf',
+                'ext': 'flv',
+                'title': 'Natural World, 2015-2016: 2. Super Powered Owls',
+                'description': 'md5:e4db5c937d0e95a7c6b5e654d429183d',
+                'duration': 3540,
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            },
+            'skip': 'geolocation',
+        }, {
+            'url': 'http://www.bbc.co.uk/iplayer/episode/b05zmgwn/royal-academy-summer-exhibition',
+            'info_dict': {
+                'id': 'b05zmgw1',
+                'ext': 'flv',
+                'description': 'Kirsty Wark and Morgan Quaintance visit the Royal Academy as it prepares for its annual artistic extravaganza, meeting people who have come together to make the show unique.',
+                'title': 'Royal Academy Summer Exhibition',
+                'duration': 3540,
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            },
+            'skip': 'geolocation',
+        }, {
+            'url': 'http://www.bbc.co.uk/iplayer/playlist/p01dvks4',
+            'only_matching': True,
+        }, {
+            'url': 'http://www.bbc.co.uk/music/clips#p02frcc3',
+            'only_matching': True,
+        }, {
+            'url': 'http://www.bbc.co.uk/iplayer/cbeebies/episode/b0480276/bing-14-atchoo',
+            'only_matching': True,
+        }
+    ]
+
+    def _extract_asx_playlist(self, connection, programme_id):
+        asx = self._download_xml(connection.get('href'), programme_id, 'Downloading ASX playlist')
+        return [ref.get('href') for ref in asx.findall('./Entry/ref')]
+
+    def _extract_connection(self, connection, programme_id):
+        formats = []
+        protocol = connection.get('protocol')
+        supplier = connection.get('supplier')
+        if protocol == 'http':
+            href = connection.get('href')
+            # ASX playlist
+            if supplier == 'asx':
+                for i, ref in enumerate(self._extract_asx_playlist(connection, programme_id)):
+                    formats.append({
+                        'url': ref,
+                        'format_id': 'ref%s_%s' % (i, supplier),
+                    })
+            # Direct link
+            else:
+                formats.append({
+                    'url': href,
+                    'format_id': supplier,
+                })
+        elif protocol == 'rtmp':
+            application = connection.get('application', 'ondemand')
+            auth_string = connection.get('authString')
+            identifier = connection.get('identifier')
+            server = connection.get('server')
+            formats.append({
+                'url': '%s://%s/%s?%s' % (protocol, server, application, auth_string),
+                'play_path': identifier,
+                'app': '%s?%s' % (application, auth_string),
+                'page_url': 'http://www.bbc.co.uk',
+                'player_url': 'http://www.bbc.co.uk/emp/releases/iplayer/revisions/617463_618125_4/617463_618125_4_emp.swf',
+                'rtmp_live': False,
+                'ext': 'flv',
+                'format_id': supplier,
+            })
+        return formats
+
+    def _extract_items(self, playlist):
+        return playlist.findall('./{http://bbc.co.uk/2008/emp/playlist}item')
+
+    def _extract_medias(self, media_selection):
+        error = media_selection.find('./{http://bbc.co.uk/2008/mp/mediaselection}error')
+        if error is not None:
+            raise ExtractorError(
+                '%s returned error: %s' % (self.IE_NAME, error.get('id')), expected=True)
+        return media_selection.findall('./{http://bbc.co.uk/2008/mp/mediaselection}media')
+
+    def _extract_connections(self, media):
+        return media.findall('./{http://bbc.co.uk/2008/mp/mediaselection}connection')
+
+    def _extract_video(self, media, programme_id):
+        formats = []
+        vbr = int_or_none(media.get('bitrate'))
+        vcodec = media.get('encoding')
+        service = media.get('service')
+        width = int_or_none(media.get('width'))
+        height = int_or_none(media.get('height'))
+        file_size = int_or_none(media.get('media_file_size'))
+        for connection in self._extract_connections(media):
+            conn_formats = self._extract_connection(connection, programme_id)
+            for fmt in conn_formats:
+                fmt.update({
+                    'format_id': '%s_%s' % (service, fmt['format_id']),
+                    'width': width,
+                    'height': height,
+                    'vbr': vbr,
+                    'vcodec': vcodec,
+                    'filesize': file_size,
+                })
+            formats.extend(conn_formats)
+        return formats
+
+    def _extract_audio(self, media, programme_id):
+        formats = []
+        abr = int_or_none(media.get('bitrate'))
+        acodec = media.get('encoding')
+        service = media.get('service')
+        for connection in self._extract_connections(media):
+            conn_formats = self._extract_connection(connection, programme_id)
+            for fmt in conn_formats:
+                fmt.update({
+                    'format_id': '%s_%s' % (service, fmt['format_id']),
+                    'abr': abr,
+                    'acodec': acodec,
+                })
+            formats.extend(conn_formats)
+        return formats
+
+    def _get_subtitles(self, media, programme_id):
+        subtitles = {}
+        for connection in self._extract_connections(media):
+            captions = self._download_xml(connection.get('href'), programme_id, 'Downloading captions')
+            lang = captions.get('{http://www.w3.org/XML/1998/namespace}lang', 'en')
+            subtitles[lang] = [
+                {
+                    'url': connection.get('href'),
+                    'ext': 'ttml',
+                },
+            ]
+        return subtitles
+
+    def _download_media_selector(self, programme_id):
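+        # A 403 from the media selector can still carry a mediaselection
+        # document; parse the body so its <error> element is reported by
+        # _extract_medias below instead of aborting.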
+        try:
+            media_selection = self._download_xml(
+                'http://open.live.bbc.co.uk/mediaselector/5/select/version/2.0/mediaset/pc/vpid/%s' % programme_id,
+                programme_id, 'Downloading media selection XML')
+        except ExtractorError as ee:
+            if isinstance(ee.cause, compat_HTTPError) and ee.cause.code == 403:
+                media_selection = xml.etree.ElementTree.fromstring(ee.cause.read().decode('utf-8'))
+            else:
+                raise
+
+        formats = []
+        subtitles = None
+
+        for media in self._extract_medias(media_selection):
+            kind = media.get('kind')
+            if kind == 'audio':
+                formats.extend(self._extract_audio(media, programme_id))
+            elif kind == 'video':
+                formats.extend(self._extract_video(media, programme_id))
+            elif kind == 'captions':
+                subtitles = self.extract_subtitles(media, programme_id)
+
+        return formats, subtitles
+
+    def _download_playlist(self, playlist_id):
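+        # Try the modern JSON playlist first; an HTTP 404 here means the
+        # programme is only reachable through the legacy XML playlist below.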
+        try:
+            playlist = self._download_json(
+                'http://www.bbc.co.uk/programmes/%s/playlist.json' % playlist_id,
+                playlist_id, 'Downloading playlist JSON')
+
+            version = playlist.get('defaultAvailableVersion')
+            if version:
+                smp_config = version['smpConfig']
+                title = smp_config['title']
+                description = smp_config['summary']
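+                # Only (radio)programme items are playable. Note the return
+                # below sits outside the loop, so the last matching item wins.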
+                for item in smp_config['items']:
+                    kind = item['kind']
+                    if kind not in ('programme', 'radioProgramme'):
+                        continue
+                    programme_id = item.get('vpid')
+                    duration = int_or_none(item.get('duration'))
+                    formats, subtitles = self._download_media_selector(programme_id)
+                return programme_id, title, description, duration, formats, subtitles
+        except ExtractorError as ee:
+            if not (isinstance(ee.cause, compat_HTTPError) and ee.cause.code == 404):
+                raise
+
+        # fallback to legacy playlist
+        playlist = self._download_xml(
+            'http://www.bbc.co.uk/iplayer/playlist/%s' % playlist_id,
+            playlist_id, 'Downloading legacy playlist XML')
+
+        no_items = playlist.find('./{http://bbc.co.uk/2008/emp/playlist}noItems')
+        if no_items is not None:
+            reason = no_items.get('reason')
+            if reason == 'preAvailability':
+                msg = 'Episode %s is not yet available' % playlist_id
+            elif reason == 'postAvailability':
+                msg = 'Episode %s is no longer available' % playlist_id
+            elif reason == 'noMedia':
+                msg = 'Episode %s is not currently available' % playlist_id
+            else:
+                msg = 'Episode %s is not available: %s' % (playlist_id, reason)
+            raise ExtractorError(msg, expected=True)
+
+        for item in self._extract_items(playlist):
+            kind = item.get('kind')
+            if kind not in ('programme', 'radioProgramme'):
+                continue
+            title = playlist.find('./{http://bbc.co.uk/2008/emp/playlist}title').text
+            description = playlist.find('./{http://bbc.co.uk/2008/emp/playlist}summary').text
+            programme_id = item.get('identifier')
+            duration = int_or_none(item.get('duration'))
+            formats, subtitles = self._download_media_selector(programme_id)
+
+        return programme_id, title, description, duration, formats, subtitles
+
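+    # Extraction strategy: prefer the embedded player JSON blob, then a bare
+    # "vpid" in the page source, and finally the playlist APIs above.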
+    def _real_extract(self, url):
+        group_id = self._match_id(url)
+
+        webpage = self._download_webpage(url, group_id, 'Downloading video page')
+
+        programme_id = None
+        duration = None
+
+        tviplayer = self._search_regex(
+            r'mediator\.bind\(({.+?})\s*,\s*document\.getElementById',
+            webpage, 'player', default=None)
+
+        if tviplayer:
+            player = self._parse_json(tviplayer, group_id).get('player', {})
+            duration = int_or_none(player.get('duration'))
+            programme_id = player.get('vpid')
+
+        if not programme_id:
+            programme_id = self._search_regex(
+                r'"vpid"\s*:\s*"([\da-z]{8})"', webpage, 'vpid', fatal=False, default=None)
+
+        if programme_id:
+            formats, subtitles = self._download_media_selector(programme_id)
+            title = self._og_search_title(webpage)
+            description = self._search_regex(
+                r'<p class="[^"]*medium-description[^"]*">([^<]+)</p>',
+                webpage, 'description', fatal=False)
+        else:
+            programme_id, title, description, duration, formats, subtitles = self._download_playlist(group_id)
+
+        self._sort_formats(formats)
+
+        return {
+            'id': programme_id,
+            'title': title,
+            'description': description,
+            'thumbnail': self._og_search_thumbnail(webpage, default=None),
+            'duration': duration,
+            'formats': formats,
+            'subtitles': subtitles,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/beatportpro.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/beatportpro.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/beatportpro.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,103 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..compat import compat_str
+from ..utils import int_or_none
+
+
+class BeatportProIE(InfoExtractor):
+    _VALID_URL = r'https?://pro\.beatport\.com/track/(?P<display_id>[^/]+)/(?P<id>[0-9]+)'
+    _TESTS = [{
+        'url': 'https://pro.beatport.com/track/synesthesia-original-mix/5379371',
+        'md5': 'b3c34d8639a2f6a7f734382358478887',
+        'info_dict': {
+            'id': '5379371',
+            'display_id': 'synesthesia-original-mix',
+            'ext': 'mp4',
+            'title': 'Froxic - Synesthesia (Original Mix)',
+        },
+    }, {
+        'url': 'https://pro.beatport.com/track/love-and-war-original-mix/3756896',
+        'md5': 'e44c3025dfa38c6577fbaeb43da43514',
+        'info_dict': {
+            'id': '3756896',
+            'display_id': 'love-and-war-original-mix',
+            'ext': 'mp3',
+            'title': 'Wolfgang Gartner - Love & War (Original Mix)',
+        },
+    }, {
+        'url': 'https://pro.beatport.com/track/birds-original-mix/4991738',
+        'md5': 'a1fd8e8046de3950fd039304c186c05f',
+        'info_dict': {
+            'id': '4991738',
+            'display_id': 'birds-original-mix',
+            'ext': 'mp4',
+            'title': "Tos, Middle Milk, Mumblin' Johnsson - Birds (Original Mix)",
+        }
+    }]
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        track_id = mobj.group('id')
+        display_id = mobj.group('display_id')
+
+        webpage = self._download_webpage(url, display_id)
+
+        playables = self._parse_json(
+            self._search_regex(
+                r'window\.Playables\s*=\s*({.+?});', webpage,
+                'playables info', flags=re.DOTALL),
+            track_id)
+
+        track = next(t for t in playables['tracks'] if t['id'] == int(track_id))
+
+        title = ', '.join(a['name'] for a in track['artists']) + ' - ' + track['name']
+        if track['mix']:
+            title += ' (' + track['mix'] + ')'
+
+        formats = []
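+        # track['preview'] maps extension -> stream info (these appear to be
+        # short preview clips); the AAC/mp4 variant is ranked above mp3.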
+        for ext, info in track['preview'].items():
+            if not info['url']:
+                continue
+            fmt = {
+                'url': info['url'],
+                'ext': ext,
+                'format_id': ext,
+                'vcodec': 'none',
+            }
+            if ext == 'mp3':
+                fmt['preference'] = 0
+                fmt['acodec'] = 'mp3'
+                fmt['abr'] = 96
+                fmt['asr'] = 44100
+            elif ext == 'mp4':
+                fmt['preference'] = 1
+                fmt['acodec'] = 'aac'
+                fmt['abr'] = 96
+                fmt['asr'] = 44100
+            formats.append(fmt)
+        self._sort_formats(formats)
+
+        images = []
+        for name, info in track['images'].items():
+            image_url = info.get('url')
+            if name == 'dynamic' or not image_url:
+                continue
+            image = {
+                'id': name,
+                'url': image_url,
+                'height': int_or_none(info.get('height')),
+                'width': int_or_none(info.get('width')),
+            }
+            images.append(image)
+
+        return {
+            'id': compat_str(track.get('id') or track_id),
+            'display_id': track.get('slug') or display_id,
+            'title': title,
+            'formats': formats,
+            'thumbnails': images,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/beeg.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/beeg.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/beeg.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,65 @@
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+
+
+class BeegIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?beeg\.com/(?P<id>\d+)'
+    _TEST = {
+        'url': 'http://beeg.com/5416503',
+        'md5': '1bff67111adb785c51d1b42959ec10e5',
+        'info_dict': {
+            'id': '5416503',
+            'ext': 'mp4',
+            'title': 'Sultry Striptease',
+            'description': 'md5:6db3c6177972822aaba18652ff59c773',
+            'categories': list,  # NSFW
+            'thumbnail': 're:https?://.*\.jpg$',
+            'age_limit': 18,
+        }
+    }
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        video_id = mobj.group('id')
+
+        webpage = self._download_webpage(url, video_id)
+
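+        # The page embeds a JS object mapping quality labels such as '480p' to
+        # stream URLs; the trailing 'p' is stripped to recover the height.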
+        quality_arr = self._search_regex(
+            r'(?s)var\s+qualityArr\s*=\s*{\s*(.+?)\s*}', webpage, 'quality formats')
+
+        formats = [{
+            'url': fmt[1],
+            'format_id': fmt[0],
+            'height': int(fmt[0][:-1]),
+        } for fmt in re.findall(r"'([^']+)'\s*:\s*'([^']+)'", quality_arr)]
+
+        self._sort_formats(formats)
+
+        title = self._html_search_regex(
+            r'<title>([^<]+)\s*-\s*beeg\.?</title>', webpage, 'title')
+
+        description = self._html_search_regex(
+            r'<meta name="description" content="([^"]*)"',
+            webpage, 'description', fatal=False)
+        thumbnail = self._html_search_regex(
+            r'\'previewer.url\'\s*:\s*"([^"]*)"',
+            webpage, 'thumbnail', fatal=False)
+
+        categories_str = self._html_search_regex(
+            r'<meta name="keywords" content="([^"]+)"', webpage, 'categories', fatal=False)
+        categories = (
+            None if categories_str is None
+            else categories_str.split(','))
+
+        return {
+            'id': video_id,
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'categories': categories,
+            'formats': formats,
+            'age_limit': 18,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/behindkink.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/behindkink.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/behindkink.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,46 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..utils import url_basename
+
+
+class BehindKinkIE(InfoExtractor):
+    _VALID_URL = r'http://(?:www\.)?behindkink\.com/(?P<year>[0-9]{4})/(?P<month>[0-9]{2})/(?P<day>[0-9]{2})/(?P<id>[^/#?_]+)'
+    _TEST = {
+        'url': 'http://www.behindkink.com/2014/12/05/what-are-you-passionate-about-marley-blaze/',
+        'md5': '507b57d8fdcd75a41a9a7bdb7989c762',
+        'info_dict': {
+            'id': '37127',
+            'ext': 'mp4',
+            'title': 'What are you passionate about – Marley Blaze',
+            'description': 'md5:aee8e9611b4ff70186f752975d9b94b4',
+            'upload_date': '20141205',
+            'thumbnail': 'http://www.behindkink.com/wp-content/uploads/2014/12/blaze-1.jpg',
+            'age_limit': 18,
+        }
+    }
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        display_id = mobj.group('id')
+
+        webpage = self._download_webpage(url, display_id)
+
+        video_url = self._search_regex(
+            r'<source src="([^"]+)"', webpage, 'video URL')
+        video_id = url_basename(video_url).split('_')[0]
+        upload_date = mobj.group('year') + mobj.group('month') + mobj.group('day')
+
+        return {
+            'id': video_id,
+            'display_id': display_id,
+            'url': video_url,
+            'title': self._og_search_title(webpage),
+            'thumbnail': self._og_search_thumbnail(webpage),
+            'description': self._og_search_description(webpage),
+            'upload_date': upload_date,
+            'age_limit': 18,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bet.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bet.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bet.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,108 @@
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..compat import compat_urllib_parse_unquote
+from ..utils import (
+    xpath_text,
+    xpath_with_ns,
+    int_or_none,
+    parse_iso8601,
+)
+
+
+class BetIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?bet\.com/(?:[^/]+/)+(?P<id>.+?)\.html'
+    _TESTS = [
+        {
+            'url': 'http://www.bet.com/news/politics/2014/12/08/in-bet-exclusive-obama-talks-race-and-racism.html',
+            'info_dict': {
+                'id': 'news/national/2014/a-conversation-with-president-obama',
+                'display_id': 'in-bet-exclusive-obama-talks-race-and-racism',
+                'ext': 'flv',
+                'title': 'A Conversation With President Obama',
+                'description': 'md5:699d0652a350cf3e491cd15cc745b5da',
+                'duration': 1534,
+                'timestamp': 1418075340,
+                'upload_date': '20141208',
+                'uploader': 'admin',
+                'thumbnail': 're:(?i)^https?://.*\.jpg$',
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            },
+        },
+        {
+            'url': 'http://www.bet.com/video/news/national/2014/justice-for-ferguson-a-community-reacts.html',
+            'info_dict': {
+                'id': 'news/national/2014/justice-for-ferguson-a-community-reacts',
+                'display_id': 'justice-for-ferguson-a-community-reacts',
+                'ext': 'flv',
+                'title': 'Justice for Ferguson: A Community Reacts',
+                'description': 'A BET News special.',
+                'duration': 1696,
+                'timestamp': 1416942360,
+                'upload_date': '20141125',
+                'uploader': 'admin',
+                'thumbnail': 're:(?i)^https?://.*\.jpg$',
+            },
+            'params': {
+                # rtmp download
+                'skip_download': True,
+            },
+        }
+    ]
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+
+        media_url = compat_urllib_parse_unquote(self._search_regex(
+            [r'mediaURL\s*:\s*"([^"]+)"', r"var\s+mrssMediaUrl\s*=\s*'([^']+)'"],
+            webpage, 'media URL'))
+
+        video_id = self._search_regex(
+            r'/video/(.*)/_jcr_content/', media_url, 'video id')
+
+        mrss = self._download_xml(media_url, display_id)
+
+        item = mrss.find('./channel/item')
+
+        NS_MAP = {
+            'dc': 'http://purl.org/dc/elements/1.1/',
+            'media': 'http://search.yahoo.com/mrss/',
+            'ka': 'http://kickapps.com/karss',
+        }
+
+        title = xpath_text(item, './title', 'title')
+        description = xpath_text(
+            item, './description', 'description', fatal=False)
+
+        timestamp = parse_iso8601(xpath_text(
+            item, xpath_with_ns('./dc:date', NS_MAP),
+            'upload date', fatal=False))
+        uploader = xpath_text(
+            item, xpath_with_ns('./dc:creator', NS_MAP),
+            'uploader', fatal=False)
+
+        media_content = item.find(
+            xpath_with_ns('./media:content', NS_MAP))
+        duration = int_or_none(media_content.get('duration'))
+        smil_url = media_content.get('url')
+
+        thumbnail = media_content.find(
+            xpath_with_ns('./media:thumbnail', NS_MAP)).get('url')
+
+        formats = self._extract_smil_formats(smil_url, display_id)
+
+        return {
+            'id': video_id,
+            'display_id': display_id,
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'timestamp': timestamp,
+            'uploader': uploader,
+            'duration': duration,
+            'formats': formats,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bild.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bild.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bild.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,42 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import (
+    int_or_none,
+    fix_xml_ampersands,
+)
+
+
+class BildIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?bild\.de/(?:[^/]+/)+(?P<display_id>[^/]+)-(?P<id>\d+)(?:,auto=true)?\.bild\.html'
+    IE_DESC = 'Bild.de'
+    _TEST = {
+        'url': 'http://www.bild.de/video/clip/apple-ipad-air/das-koennen-die-neuen-ipads-38184146.bild.html',
+        'md5': 'dd495cbd99f2413502a1713a1156ac8a',
+        'info_dict': {
+            'id': '38184146',
+            'ext': 'mp4',
+            'title': 'BILD hat sie getestet',
+            'thumbnail': 're:^https?://.*\.jpg$',
+            'duration': 196,
+            'description': 'Mit dem iPad Air 2 und dem iPad Mini 3 hat Apple zwei neue Tablet-Modelle präsentiert. BILD-Reporter Sven Stein durfte die Geräte bereits testen. ',
+        }
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
+        xml_url = url.split(".bild.html")[0] + ",view=xml.bild.xml"
+        doc = self._download_xml(xml_url, video_id, transform_source=fix_xml_ampersands)
+
+        duration = int_or_none(doc.attrib.get('duration'), scale=1000)
+
+        return {
+            'id': video_id,
+            'title': doc.attrib['ueberschrift'],
+            'description': doc.attrib.get('text'),
+            'url': doc.attrib['src'],
+            'thumbnail': doc.attrib.get('img'),
+            'duration': duration,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bilibili.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bilibili.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bilibili.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,135 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+import itertools
+import json
+import xml.etree.ElementTree as ET
+
+from .common import InfoExtractor
+from ..utils import (
+    int_or_none,
+    unified_strdate,
+    ExtractorError,
+)
+
+
+class BiliBiliIE(InfoExtractor):
+    _VALID_URL = r'http://www\.bilibili\.(?:tv|com)/video/av(?P<id>[0-9]+)/'
+
+    _TESTS = [{
+        'url': 'http://www.bilibili.tv/video/av1074402/',
+        'md5': '2c301e4dab317596e837c3e7633e7d86',
+        'info_dict': {
+            'id': '1074402_part1',
+            'ext': 'flv',
+            'title': '【金坷垃】金泡沫',
+            'duration': 308,
+            'upload_date': '20140420',
+            'thumbnail': 're:^https?://.+\.jpg',
+        },
+    }, {
+        'url': 'http://www.bilibili.com/video/av1041170/',
+        'info_dict': {
+            'id': '1041170',
+            'title': '【BD1080P】刀语【诸神&异域】',
+        },
+        'playlist_count': 9,
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        if self._search_regex(r'(此视频不存在或被删除)', webpage, 'error message', default=None):
+            raise ExtractorError('The video does not exist or was deleted', expected=True)
+        video_code = self._search_regex(
+            r'(?s)<div itemprop="video".*?>(.*?)</div>', webpage, 'video code')
+
+        title = self._html_search_meta(
+            'media:title', video_code, 'title', fatal=True)
+        duration_str = self._html_search_meta(
+            'duration', video_code, 'duration')
+        if duration_str is None:
+            duration = None
+        else:
+            duration_mobj = re.match(
+                r'^T(?:(?P<hours>[0-9]+)H)?(?P<minutes>[0-9]+)M(?P<seconds>[0-9]+)S$',
+                duration_str)
+            duration = (
+                int_or_none(duration_mobj.group('hours'), default=0) * 3600 +
+                int(duration_mobj.group('minutes')) * 60 +
+                int(duration_mobj.group('seconds')))
+        upload_date = unified_strdate(self._html_search_meta(
+            'uploadDate', video_code, fatal=False))
+        thumbnail = self._html_search_meta(
+            'thumbnailUrl', video_code, 'thumbnail', fatal=False)
+
+        cid = self._search_regex(r'cid=(\d+)', webpage, 'cid')
+
+        entries = []
+
+        lq_page = self._download_webpage(
+            'http://interface.bilibili.com/v_cdn_play?appkey=1&cid=%s' % cid,
+            video_id,
+            note='Downloading LQ video info'
+        )
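+        # The LQ endpoint replies with JSON only when reporting an error; a
+        # successful response is XML, so a failed json.loads means all is well.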
+        try:
+            err_info = json.loads(lq_page)
+            raise ExtractorError(
+                'BiliBili said: ' + err_info['error_text'], expected=True)
+        except ValueError:
+            pass
+
+        lq_doc = ET.fromstring(lq_page)
+        lq_durls = lq_doc.findall('./durl')
+
+        hq_doc = self._download_xml(
+            'http://interface.bilibili.com/playurl?appkey=1&cid=%s' % cid,
+            video_id,
+            note='Downloading HQ video info',
+            fatal=False,
+        )
+        if hq_doc is not False:
+            hq_durls = hq_doc.findall('./durl')
+            assert len(lq_durls) == len(hq_durls)
+        else:
+            hq_durls = itertools.repeat(None)
+
+        for i, (lq_durl, hq_durl) in enumerate(zip(lq_durls, hq_durls), start=1):
+            formats = [{
+                'format_id': 'lq',
+                'quality': 1,
+                'url': lq_durl.find('./url').text,
+                'filesize': int_or_none(
+                    lq_durl.find('./size'), get_attr='text'),
+            }]
+            if hq_durl is not None:
+                formats.append({
+                    'format_id': 'hq',
+                    'quality': 2,
+                    'ext': 'flv',
+                    'url': hq_durl.find('./url').text,
+                    'filesize': int_or_none(
+                        hq_durl.find('./size'), get_attr='text'),
+                })
+            self._sort_formats(formats)
+
+            entries.append({
+                'id': '%s_part%d' % (video_id, i),
+                'title': title,
+                'formats': formats,
+                'duration': duration,
+                'upload_date': upload_date,
+                'thumbnail': thumbnail,
+            })
+
+        return {
+            '_type': 'multi_video',
+            'entries': entries,
+            'id': video_id,
+            'title': title
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/blinkx.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/blinkx.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/blinkx.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,86 @@
+from __future__ import unicode_literals
+
+import json
+
+from .common import InfoExtractor
+from ..utils import (
+    remove_start,
+    int_or_none,
+)
+
+
+class BlinkxIE(InfoExtractor):
+    _VALID_URL = r'(?:https?://(?:www\.)blinkx\.com/#?ce/|blinkx:)(?P<id>[^?]+)'
+    IE_NAME = 'blinkx'
+
+    _TEST = {
+        'url': 'http://www.blinkx.com/ce/Da0Gw3xc5ucpNduzLuDDlv4WC9PuI4fDi1-t6Y3LyfdY2SZS5Urbvn-UPJvrvbo8LTKTc67Wu2rPKSQDJyZeeORCR8bYkhs8lI7eqddznH2ofh5WEEdjYXnoRtj7ByQwt7atMErmXIeYKPsSDuMAAqJDlQZ-3Ff4HJVeH_s3Gh8oQ',
+        'md5': '337cf7a344663ec79bf93a526a2e06c7',
+        'info_dict': {
+            'id': 'Da0Gw3xc',
+            'ext': 'mp4',
+            'title': 'No Daily Show for John Oliver; HBO Show Renewed - IGN News',
+            'uploader': 'IGN News',
+            'upload_date': '20150217',
+            'timestamp': 1424215740,
+            'description': 'HBO has renewed Last Week Tonight With John Oliver for two more seasons.',
+            'duration': 47.743333,
+        },
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        display_id = video_id[:8]
+
+        api_url = ('https://apib4.blinkx.com/api.php?action=play_video&' +
+                   'video=%s' % video_id)
+        data_json = self._download_webpage(api_url, display_id)
+        data = json.loads(data_json)['api']['results'][0]
+        duration = None
+        thumbnails = []
+        formats = []
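+        # Dispatch on the media entry type: 'jpg' entries become thumbnails,
+        # 'original' carries the duration, 'youtube' delegates to the YouTube
+        # extractor, and 'flv'/'mp4' entries are actual formats.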
+        for m in data['media']:
+            if m['type'] == 'jpg':
+                thumbnails.append({
+                    'url': m['link'],
+                    'width': int(m['w']),
+                    'height': int(m['h']),
+                })
+            elif m['type'] == 'original':
+                duration = float(m['d'])
+            elif m['type'] == 'youtube':
+                yt_id = m['link']
+                self.to_screen('Youtube video detected: %s' % yt_id)
+                return self.url_result(yt_id, 'Youtube', video_id=yt_id)
+            elif m['type'] in ('flv', 'mp4'):
+                vcodec = remove_start(m['vcodec'], 'ff')
+                acodec = remove_start(m['acodec'], 'ff')
+                vbr = int_or_none(m.get('vbr') or m.get('vbitrate'), 1000)
+                abr = int_or_none(m.get('abr') or m.get('abitrate'), 1000)
+                tbr = vbr + abr if vbr and abr else None
+                format_id = '%s-%sk-%s' % (vcodec, tbr, m['w'])
+                formats.append({
+                    'format_id': format_id,
+                    'url': m['link'],
+                    'vcodec': vcodec,
+                    'acodec': acodec,
+                    'abr': abr,
+                    'vbr': vbr,
+                    'tbr': tbr,
+                    'width': int_or_none(m.get('w')),
+                    'height': int_or_none(m.get('h')),
+                })
+
+        self._sort_formats(formats)
+
+        return {
+            'id': display_id,
+            'fullid': video_id,
+            'title': data['title'],
+            'formats': formats,
+            'uploader': data['channel_name'],
+            'timestamp': data['pubdate_epoch'],
+            'description': data.get('description'),
+            'thumbnails': thumbnails,
+            'duration': duration,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bliptv.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bliptv.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bliptv.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,293 @@
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+
+from ..compat import (
+    compat_str,
+    compat_urllib_request,
+    compat_urlparse,
+)
+from ..utils import (
+    clean_html,
+    int_or_none,
+    parse_iso8601,
+    unescapeHTML,
+    xpath_text,
+    xpath_with_ns,
+)
+
+
+class BlipTVIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:\w+\.)?blip\.tv/(?:(?:.+-|rss/flash/)(?P<id>\d+)|((?:play/|api\.swf#)(?P<lookup_id>[\da-zA-Z+_]+)))'
+
+    _TESTS = [
+        {
+            'url': 'http://blip.tv/cbr/cbr-exclusive-gotham-city-imposters-bats-vs-jokerz-short-3-5796352',
+            'md5': '80baf1ec5c3d2019037c1c707d676b9f',
+            'info_dict': {
+                'id': '5779306',
+                'ext': 'm4v',
+                'title': 'CBR EXCLUSIVE: "Gotham City Imposters" Bats VS Jokerz Short 3',
+                'description': 'md5:9bc31f227219cde65e47eeec8d2dc596',
+                'timestamp': 1323138843,
+                'upload_date': '20111206',
+                'uploader': 'cbr',
+                'uploader_id': '679425',
+                'duration': 81,
+            }
+        },
+        {
+            # https://github.com/rg3/youtube-dl/pull/2274
+            'note': 'Video with subtitles',
+            'url': 'http://blip.tv/play/h6Uag5OEVgI.html',
+            'md5': '309f9d25b820b086ca163ffac8031806',
+            'info_dict': {
+                'id': '6586561',
+                'ext': 'mp4',
+                'title': 'Red vs. Blue Season 11 Episode 1',
+                'description': 'One-Zero-One',
+                'timestamp': 1371261608,
+                'upload_date': '20130615',
+                'uploader': 'redvsblue',
+                'uploader_id': '792887',
+                'duration': 279,
+            }
+        },
+        {
+            # https://bugzilla.redhat.com/show_bug.cgi?id=967465
+            'url': 'http://a.blip.tv/api.swf#h6Uag5KbVwI',
+            'md5': '314e87b1ebe7a48fcbfdd51b791ce5a6',
+            'info_dict': {
+                'id': '6573122',
+                'ext': 'mov',
+                'upload_date': '20130520',
+                'description': 'Two hapless space marines argue over what to do when they realize they have an astronomically huge problem on their hands.',
+                'title': 'Red vs. Blue Season 11 Trailer',
+                'timestamp': 1369029609,
+                'uploader': 'redvsblue',
+                'uploader_id': '792887',
+            }
+        },
+        {
+            'url': 'http://blip.tv/play/gbk766dkj4Yn',
+            'md5': 'fe0a33f022d49399a241e84a8ea8b8e3',
+            'info_dict': {
+                'id': '1749452',
+                'ext': 'mp4',
+                'upload_date': '20090208',
+                'description': 'Witness the first appearance of the Nostalgia Critic character, as Doug reviews the movie Transformers.',
+                'title': 'Nostalgia Critic: Transformers',
+                'timestamp': 1234068723,
+                'uploader': 'NostalgiaCritic',
+                'uploader_id': '246467',
+            }
+        },
+        {
+            # https://github.com/rg3/youtube-dl/pull/4404
+            'note': 'Audio only',
+            'url': 'http://blip.tv/hilarios-productions/weekly-manga-recap-kingdom-7119982',
+            'md5': '76c0a56f24e769ceaab21fbb6416a351',
+            'info_dict': {
+                'id': '7103299',
+                'ext': 'flv',
+                'title': 'Weekly Manga Recap: Kingdom',
+                'description': 'And then Shin breaks the enemy line, and he&apos;s all like HWAH! And then he slices a guy and it&apos;s all like FWASHING! And... it&apos;s really hard to describe the best parts of this series without breaking down into sound effects, okay?',
+                'timestamp': 1417660321,
+                'upload_date': '20141204',
+                'uploader': 'The Rollo T',
+                'uploader_id': '407429',
+                'duration': 7251,
+                'vcodec': 'none',
+            }
+        },
+        {
+            # missing duration
+            'url': 'http://blip.tv/rss/flash/6700880',
+            'info_dict': {
+                'id': '6684191',
+                'ext': 'm4v',
+                'title': 'Cowboy Bebop: Gateway Shuffle Review',
+                'description': 'md5:3acc480c0f9ae157f5fe88547ecaf3f8',
+                'timestamp': 1386639757,
+                'upload_date': '20131210',
+                'uploader': 'sfdebris',
+                'uploader_id': '706520',
+            }
+        }
+    ]
+
+    @staticmethod
+    def _extract_url(webpage):
+        mobj = re.search(r'<meta\s[^>]*https?://api\.blip\.tv/\w+/redirect/\w+/(\d+)', webpage)
+        if mobj:
+            return 'http://blip.tv/a/a-' + mobj.group(1)
+        mobj = re.search(r'<(?:iframe|embed|object)\s[^>]*(https?://(?:\w+\.)?blip\.tv/(?:play/|api\.swf#)[a-zA-Z0-9_]+)', webpage)
+        if mobj:
+            return mobj.group(1)
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        lookup_id = mobj.group('lookup_id')
+
+        # See https://github.com/rg3/youtube-dl/issues/857 and
+        # https://github.com/rg3/youtube-dl/issues/4197
+        if lookup_id:
+            urlh = self._request_webpage(
+                'http://blip.tv/play/%s' % lookup_id, lookup_id, 'Resolving lookup id')
+            url = compat_urlparse.urlparse(urlh.geturl())
+            qs = compat_urlparse.parse_qs(url.query)
+            mobj = re.match(self._VALID_URL, qs['file'][0])
+
+        video_id = mobj.group('id')
+
+        rss = self._download_xml('http://blip.tv/rss/flash/%s' % video_id, video_id, 'Downloading video RSS')
+
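+        # Qualify bare tag names with the blip/media/itunes XML namespaces.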
+        def _x(p):
+            return xpath_with_ns(p, {
+                'blip': 'http://blip.tv/dtd/blip/1.0',
+                'media': 'http://search.yahoo.com/mrss/',
+                'itunes': 'http://www.itunes.com/dtds/podcast-1.0.dtd',
+            })
+
+        item = rss.find('channel/item')
+
+        video_id = xpath_text(item, _x('blip:item_id'), 'video id') or lookup_id
+        title = xpath_text(item, 'title', 'title', fatal=True)
+        description = clean_html(xpath_text(item, _x('blip:puredescription'), 'description'))
+        timestamp = parse_iso8601(xpath_text(item, _x('blip:datestamp'), 'timestamp'))
+        uploader = xpath_text(item, _x('blip:user'), 'uploader')
+        uploader_id = xpath_text(item, _x('blip:userid'), 'uploader id')
+        duration = int_or_none(xpath_text(item, _x('blip:runtime'), 'duration'))
+        media_thumbnail = item.find(_x('media:thumbnail'))
+        thumbnail = (media_thumbnail.get('url') if media_thumbnail is not None
+                     else xpath_text(item, 'image', 'thumbnail'))
+        categories = [category.text for category in item.findall('category') if category is not None]
+
+        formats = []
+        subtitles_urls = {}
+
+        media_group = item.find(_x('media:group'))
+        for media_content in media_group.findall(_x('media:content')):
+            url = media_content.get('url')
+            role = media_content.get(_x('blip:role'))
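+            # Requesting the media URL with view=url makes blip.tv answer with a
+            # query-string payload whose 'message' field holds the real location.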
+            msg = self._download_webpage(
+                url + '?showplayer=20140425131715&referrer=http://blip.tv&mask=7&skin=flashvars&view=url',
+                video_id, 'Resolving URL for %s' % role)
+            real_url = compat_urlparse.parse_qs(msg.strip())['message'][0]
+
+            media_type = media_content.get('type')
+            if media_type == 'text/srt' or url.endswith('.srt'):
+                LANGS = {
+                    'english': 'en',
+                }
+                lang = role.rpartition('-')[-1].strip().lower()
+                langcode = LANGS.get(lang, lang)
+                subtitles_urls[langcode] = url
+            elif media_type.startswith('video/'):
+                formats.append({
+                    'url': real_url,
+                    'format_id': role,
+                    'format_note': media_type,
+                    'vcodec': media_content.get(_x('blip:vcodec')) or 'none',
+                    'acodec': media_content.get(_x('blip:acodec')),
+                    'filesize': media_content.get('filesize'),
+                    'width': int_or_none(media_content.get('width')),
+                    'height': int_or_none(media_content.get('height')),
+                })
+        self._check_formats(formats, video_id)
+        self._sort_formats(formats)
+
+        subtitles = self.extract_subtitles(video_id, subtitles_urls)
+
+        return {
+            'id': video_id,
+            'title': title,
+            'description': description,
+            'timestamp': timestamp,
+            'uploader': uploader,
+            'uploader_id': uploader_id,
+            'duration': duration,
+            'thumbnail': thumbnail,
+            'categories': categories,
+            'formats': formats,
+            'subtitles': subtitles,
+        }
+
+    def _get_subtitles(self, video_id, subtitles_urls):
+        subtitles = {}
+        for lang, url in subtitles_urls.items():
+            # For some weird reason, blip.tv serves a video instead of subtitles
+            # when we request with a common UA
+            req = compat_urllib_request.Request(url)
+            req.add_header('User-Agent', 'youtube-dl')
+            subtitles[lang] = [{
+                # The extension is 'srt' but it's actually an 'ass' file
+                'ext': 'ass',
+                'data': self._download_webpage(req, None, note=False),
+            }]
+        return subtitles
+
+
+class BlipTVUserIE(InfoExtractor):
+    _VALID_URL = r'(?:(?:https?://(?:\w+\.)?blip\.tv/)|bliptvuser:)(?!api\.swf)([^/]+)/*$'
+    _PAGE_SIZE = 12
+    IE_NAME = 'blip.tv:user'
+    _TEST = {
+        'url': 'http://blip.tv/actone',
+        'info_dict': {
+            'id': 'actone',
+            'title': 'Act One: The Series',
+        },
+        'playlist_count': 5,
+    }
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        username = mobj.group(1)
+
+        page_base = 'http://m.blip.tv/pr/show_get_full_episode_list?users_id=%s&lite=0&esi=1'
+
+        page = self._download_webpage(url, username, 'Downloading user page')
+        mobj = re.search(r'data-users-id="([^"]+)"', page)
+        page_base = page_base % mobj.group(1)
+        title = self._og_search_title(page)
+
+        # Download video ids using Blip.tv Ajax calls. The result size per
+        # query is limited (currently to 12 videos), so we query page by
+        # page until no more video ids are returned.
+
+        video_ids = []
+        pagenum = 1
+
+        while True:
+            url = page_base + '&page=' + str(pagenum)
+            page = self._download_webpage(
+                url, username, 'Downloading video ids from page %d' % pagenum)
+
+            # Extract video identifiers
+            ids_in_page = []
+
+            for mobj in re.finditer(r'href="/([^"]+)"', page):
+                if mobj.group(1) not in ids_in_page:
+                    ids_in_page.append(unescapeHTML(mobj.group(1)))
+
+            video_ids.extend(ids_in_page)
+
+            # A small optimization: if the current page is not "full",
+            # i.e. it contains fewer than PAGE_SIZE video ids, we can
+            # assume it is the last page and stop querying.
+            if len(ids_in_page) < self._PAGE_SIZE:
+                break
+
+            pagenum += 1
+
+        urls = ['http://blip.tv/%s' % video_id for video_id in video_ids]
+        url_entries = [self.url_result(vurl, 'BlipTV') for vurl in urls]
+        return self.playlist_result(
+            url_entries, playlist_title=title, playlist_id=username)

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bloomberg.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bloomberg.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bloomberg.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,44 @@
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+
+
+class BloombergIE(InfoExtractor):
+    _VALID_URL = r'https?://www\.bloomberg\.com/news/videos/[^/]+/(?P<id>[^/?#]+)'
+
+    _TEST = {
+        'url': 'http://www.bloomberg.com/news/videos/b/aaeae121-5949-481e-a1ce-4562db6f5df2',
+        # The md5 checksum changes
+        'info_dict': {
+            'id': 'qurhIVlJSB6hzkVi229d8g',
+            'ext': 'flv',
+            'title': 'Shah\'s Presentation on Foreign-Exchange Strategies',
+            'description': 'md5:a8ba0302912d03d246979735c17d2761',
+        },
+    }
+
+    def _real_extract(self, url):
+        name = self._match_id(url)
+        webpage = self._download_webpage(url, name)
+        video_id = self._search_regex(r'"bmmrId":"(.+?)"', webpage, 'id')
+        title = re.sub(': Video$', '', self._og_search_title(webpage))
+
+        embed_info = self._download_json(
+            'http://www.bloomberg.com/api/embed?id=%s' % video_id, video_id)
+        formats = []
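+        # Streams muxed as TS are handled as HLS (m3u8); everything else is
+        # handled as HDS (f4m).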
+        for stream in embed_info['streams']:
+            if stream["muxing_format"] == "TS":
+                formats.extend(self._extract_m3u8_formats(stream['url'], video_id))
+            else:
+                formats.extend(self._extract_f4m_formats(stream['url'], video_id))
+        self._sort_formats(formats)
+
+        return {
+            'id': video_id,
+            'title': title,
+            'formats': formats,
+            'description': self._og_search_description(webpage),
+            'thumbnail': self._og_search_thumbnail(webpage),
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bpb.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bpb.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/bpb.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,37 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+
+
+class BpbIE(InfoExtractor):
+    IE_DESC = 'Bundeszentrale für politische Bildung'
+    _VALID_URL = r'http://www\.bpb\.de/mediathek/(?P<id>[0-9]+)/'
+
+    _TEST = {
+        'url': 'http://www.bpb.de/mediathek/297/joachim-gauck-zu-1989-und-die-erinnerung-an-die-ddr',
+        'md5': '0792086e8e2bfbac9cdf27835d5f2093',
+        'info_dict': {
+            'id': '297',
+            'ext': 'mp4',
+            'title': 'Joachim Gauck zu 1989 und die Erinnerung an die DDR',
+            'description': 'Joachim Gauck, erster Beauftragter für die Stasi-Unterlagen, spricht auf dem Geschichtsforum über die friedliche Revolution 1989 und eine "gewisse Traurigkeit" im Umgang mit der DDR-Vergangenheit.'
+        }
+    }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        title = self._html_search_regex(
+            r'<h2 class="white">(.*?)</h2>', webpage, 'title')
+        video_url = self._html_search_regex(
+            r'(http://film\.bpb\.de/player/dokument_[0-9]+\.mp4)',
+            webpage, 'video URL')
+
+        return {
+            'id': video_id,
+            'url': video_url,
+            'title': title,
+            'description': self._og_search_description(webpage),
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/br.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/br.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/br.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,143 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import (
+    ExtractorError,
+    int_or_none,
+    parse_duration,
+)
+
+
+class BRIE(InfoExtractor):
+    IE_DESC = 'Bayerischer Rundfunk Mediathek'
+    _VALID_URL = r'https?://(?:www\.)?br\.de/(?:[a-z0-9\-_]+/)+(?P<id>[a-z0-9\-_]+)\.html'
+    _BASE_URL = 'http://www.br.de'
+
+    _TESTS = [
+        {
+            'url': 'http://www.br.de/mediathek/video/sendungen/abendschau/betriebliche-altersvorsorge-104.html',
+            'md5': '83a0477cf0b8451027eb566d88b51106',
+            'info_dict': {
+                'id': '48f656ef-287e-486f-be86-459122db22cc',
+                'ext': 'mp4',
+                'title': 'Die böse Überraschung',
+                'description': 'Betriebliche Altersvorsorge: Die böse Überraschung',
+                'duration': 180,
+                'uploader': 'Reinhard Weber',
+                'upload_date': '20150422',
+            }
+        },
+        {
+            'url': 'http://www.br.de/nachrichten/oberbayern/inhalt/muenchner-polizeipraesident-schreiber-gestorben-100.html',
+            'md5': 'a44396d73ab6a68a69a568fae10705bb',
+            'info_dict': {
+                'id': 'a4b83e34-123d-4b81-9f4e-c0d3121a4e05',
+                'ext': 'mp4',
+                'title': 'Manfred Schreiber ist tot',
+                'description': 'Abendschau kompakt: Manfred Schreiber ist tot',
+                'duration': 26,
+            }
+        },
+        {
+            'url': 'http://www.br.de/radio/br-klassik/sendungen/allegro/premiere-urauffuehrung-the-land-2015-dance-festival-muenchen-100.html',
+            'md5': '8b5b27c0b090f3b35eac4ab3f7a73d3d',
+            'info_dict': {
+                'id': '74c603c9-26d3-48bb-b85b-079aeed66e0b',
+                'ext': 'aac',
+                'title': 'Kurzweilig und sehr bewegend',
+                'description': '"The Land" von Peeping Tom: Kurzweilig und sehr bewegend',
+                'duration': 296,
+            }
+        },
+        {
+            'url': 'http://www.br.de/radio/bayern1/service/team/videos/team-video-erdelt100.html',
+            'md5': 'dbab0aef2e047060ea7a21fc1ce1078a',
+            'info_dict': {
+                'id': '6ba73750-d405-45d3-861d-1ce8c524e059',
+                'ext': 'mp4',
+                'title': 'Umweltbewusster Häuslebauer',
+                'description': 'Uwe Erdelt: Umweltbewusster Häuslebauer',
+                'duration': 116,
+            }
+        },
+        {
+            'url': 'http://www.br.de/fernsehen/br-alpha/sendungen/kant-fuer-anfaenger/kritik-der-reinen-vernunft/kant-kritik-01-metaphysik100.html',
+            'md5': '23bca295f1650d698f94fc570977dae3',
+            'info_dict': {
+                'id': 'd982c9ce-8648-4753-b358-98abb8aec43d',
+                'ext': 'mp4',
+                'title': 'Folge 1 - Metaphysik',
+                'description': 'Kant für Anfänger: Folge 1 - Metaphysik',
+                'duration': 893,
+                'uploader': 'Eva Maria Steimle',
+                'upload_date': '20140117',
+            }
+        },
+    ]
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        page = self._download_webpage(url, display_id)
+        xml_url = self._search_regex(
+            r"return BRavFramework\.register\(BRavFramework\('avPlayer_(?:[a-f0-9-]{36})'\)\.setup\({dataURL:'(/(?:[a-z0-9\-]+/)+[a-z0-9/~_.-]+)'}\)\);", page, 'XMLURL')
+        xml = self._download_xml(self._BASE_URL + xml_url, None)
+
+        medias = []
+
+        for xml_media in xml.findall('video') + xml.findall('audio'):
+            media = {
+                'id': xml_media.get('externalId'),
+                'title': xml_media.find('title').text,
+                'duration': parse_duration(xml_media.find('duration').text),
+                'formats': self._extract_formats(xml_media.find('assets')),
+                'thumbnails': self._extract_thumbnails(xml_media.find('teaserImage/variants')),
+                'description': ' '.join(xml_media.find('shareTitle').text.splitlines()),
+                'webpage_url': xml_media.find('permalink').text
+            }
+            if xml_media.find('author').text:
+                media['uploader'] = xml_media.find('author').text
+            if xml_media.find('broadcastDate').text:
+                media['upload_date'] = ''.join(reversed(xml_media.find('broadcastDate').text.split('.')))
+            medias.append(media)
+
+        if len(medias) > 1:
+            self._downloader.report_warning(
+                'found multiple medias; please '
+                'report this with the video URL to http://yt-dl.org/bug')
+        if not medias:
+            raise ExtractorError('No media entries found')
+        return medias[0]
+
+    def _extract_formats(self, assets):
+
+        def text_or_none(asset, tag):
+            elem = asset.find(tag)
+            return None if elem is None else elem.text
+
+        formats = [{
+            'url': text_or_none(asset, 'downloadUrl'),
+            'ext': text_or_none(asset, 'mediaType'),
+            'format_id': asset.get('type'),
+            'width': int_or_none(text_or_none(asset, 'frameWidth')),
+            'height': int_or_none(text_or_none(asset, 'frameHeight')),
+            'tbr': int_or_none(text_or_none(asset, 'bitrateVideo')),
+            'abr': int_or_none(text_or_none(asset, 'bitrateAudio')),
+            'vcodec': text_or_none(asset, 'codecVideo'),
+            'acodec': text_or_none(asset, 'codecAudio'),
+            'container': text_or_none(asset, 'mediaType'),
+            'filesize': int_or_none(text_or_none(asset, 'size')),
+        } for asset in assets.findall('asset')
+            if asset.find('downloadUrl') is not None]
+
+        self._sort_formats(formats)
+        return formats
+
+    def _extract_thumbnails(self, variants):
+        thumbnails = [{
+            'url': self._BASE_URL + variant.find('url').text,
+            'width': int_or_none(variant.find('width').text),
+            'height': int_or_none(variant.find('height').text),
+        } for variant in variants.findall('variant')]
+        thumbnails.sort(key=lambda x: (x['width'] or 0) * (x['height'] or 0), reverse=True)
+        return thumbnails

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/breakcom.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/breakcom.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/breakcom.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,63 @@
+from __future__ import unicode_literals
+
+import re
+import json
+
+from .common import InfoExtractor
+from ..utils import (
+    int_or_none,
+    parse_age_limit,
+)
+
+
+class BreakIE(InfoExtractor):
+    _VALID_URL = r'http://(?:www\.)?break\.com/video/(?:[^/]+/)*.+-(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'http://www.break.com/video/when-girls-act-like-guys-2468056',
+        'info_dict': {
+            'id': '2468056',
+            'ext': 'mp4',
+            'title': 'When Girls Act Like D-Bags',
+        }
+    }, {
+        'url': 'http://www.break.com/video/ugc/baby-flex-2773063',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(
+            'http://www.break.com/embed/%s' % video_id, video_id)
+        info = json.loads(self._search_regex(
+            r'var embedVars = ({.*})\s*?</script>',
+            webpage, 'info json', flags=re.DOTALL))
+
+        youtube_id = info.get('youtubeId')
+        if youtube_id:
+            return self.url_result(youtube_id, 'Youtube')
+
+        formats = [{
+            'url': media['uri'] + '?' + info['AuthToken'],
+            'tbr': media['bitRate'],
+            'width': media['width'],
+            'height': media['height'],
+        } for media in info['media'] if media.get('mediaPurpose') == 'play']
+
+        if not formats:
+            formats.append({
+                'url': info['videoUri']
+            })
+
+        self._sort_formats(formats)
+
+        duration = int_or_none(info.get('videoLengthInSeconds'))
+        age_limit = parse_age_limit(info.get('audienceRating'))
+
+        return {
+            'id': video_id,
+            'title': info['contentName'],
+            'thumbnail': info['thumbUri'],
+            'duration': duration,
+            'age_limit': age_limit,
+            'formats': formats,
+        }

=== added file 'openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/brightcove.py'
--- openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/brightcove.py	1970-01-01 00:00:00 +0000
+++ openlp/plugins/planningcenter/lib/youtube-dl/youtube_dl/extractor/brightcove.py	2015-09-23 12:21:33 +0000
@@ -0,0 +1,348 @@
+# encoding: utf-8
+from __future__ import unicode_literals
+
+import re
+import json
+import xml.etree.ElementTree
+
+from .common import InfoExtractor
+from ..compat import (
+    compat_parse_qs,
+    compat_str,
+    compat_urllib_parse,
+    compat_urllib_parse_urlparse,
+    compat_urllib_request,
+    compat_urlparse,
+    compat_xml_parse_error,
+)
+from ..utils import (
+    determine_ext,
+    ExtractorError,
+    find_xpath_attr,
+    fix_xml_ampersands,
+    unescapeHTML,
+    unsmuggle_url,
+)
+
+
+class BrightcoveIE(InfoExtractor):
+    _VALID_URL = r'(?:https?://.*brightcove\.com/(services|viewer).*?\?|brightcove:)(?P<query>.*)'
+    _FEDERATED_URL_TEMPLATE = 'http://c.brightcove.com/services/viewer/htmlFederated?%s'
+
+    _TESTS = [
+        {
+            # From http://www.8tv.cat/8aldia/videos/xavier-sala-i-martin-aquesta-tarda-a-8-al-dia/
+            'url': 'http://c.brightcove.com/services/viewer/htmlFederated?playerID=1654948606001&flashID=myExperience&%40videoPlayer=2371591881001',
+            'md5': '5423e113865d26e40624dce2e4b45d95',
+            'note': 'Test Brightcove downloads and detection in GenericIE',
+            'info_dict': {
+                'id': '2371591881001',
+                'ext': 'mp4',
+                'title': 'Xavier Sala i Martín: “Un banc que no presta és un banc zombi que no serveix per a res”',
+                'uploader': '8TV',
+                'description': 'md5:a950cc4285c43e44d763d036710cd9cd',
+            }
+        },
+        {
+            # From http://medianetwork.oracle.com/video/player/1785452137001
+            'url': 'http://c.brightcove.com/services/viewer/htmlFederated?playerID=1217746023001&flashID=myPlayer&%40videoPlayer=1785452137001',
+            'info_dict': {
+                'id': '1785452137001',
+                'ext': 'flv',
+                'title': 'JVMLS 2012: Arrays 2.0 - Opportunities and Challenges',
+                'description': 'John Rose speaks at the JVM Language Summit, August 1, 2012.',
+                'uploader': 'Oracle',
+            },
+        },
+        {
+            # From http://mashable.com/2013/10/26/thermoelectric-bracelet-lets-you-control-your-body-temperature/
+            'url': 'http://c.brightcove.com/services/viewer/federated_f9?&playerID=1265504713001&publisherID=AQ%7E%7E%2CAAABBzUwv1E%7E%2CxP-xFHVUstiMFlNYfvF4G9yFnNaqCw_9&videoID=2750934548001',
+            'info_dict': {
+                'id': '2750934548001',
+                'ext': 'mp4',
+                'title': 'This Bracelet Acts as a Personal Thermostat',
+                'description': 'md5:547b78c64f4112766ccf4e151c20b6a0',
+                'uploader': 'Mashable',
+            },
+        },
+        {
+            # test that the default referer works
+            # from http://national.ballet.ca/interact/video/Lost_in_Motion_II/
+            'url': 'http://link.brightcove.com/services/player/bcpid756015033001?bckey=AQ~~,AAAApYJi_Ck~,GxhXCegT1Dp39ilhXuxMJxasUhVNZiil&bctid=2878862109001',
+            'info_dict': {
+                'id': '2878862109001',
+                'ext': 'mp4',
+                'title': 'Lost in Motion II',
+                'description': 'md5:363109c02998fee92ec02211bd8000df',
+                'uploader': 'National Ballet of Canada',
+            },
+        },
+        {
+            # test flv videos served by akamaihd.net
+            # From http://www.redbull.com/en/bike/stories/1331655643987/replay-uci-dh-world-cup-2014-from-fort-william
+            'url': 'http://c.brightcove.com/services/viewer/htmlFederated?%40videoPlayer=ref%3ABC2996102916001&linkBaseURL=http%3A%2F%2Fwww.redbull.com%2Fen%2Fbike%2Fvideos%2F1331655630249%2Freplay-uci-fort-william-2014-dh&playerKey=AQ%7E%7E%2CAAAApYJ7UqE%7E%2Cxqr_zXk0I-zzNndy8NlHogrCb5QdyZRf&playerID=1398061561001#__youtubedl_smuggle=%7B%22Referer%22%3A+%22http%3A%2F%2Fwww.redbull.com%2Fen%2Fbike%2Fstories%2F1331655643987%2Freplay-uci-dh-world-cup-2014-from-fort-william%22%7D',
+            # The md5 checksum changes on each download
+            'info_dict': {
+                'id': '2996102916001',
+                'ext': 'flv',
+                'title': 'UCI MTB World Cup 2014: Fort William, UK - Downhill Finals',
+                'uploader': 'Red Bull TV',
+                'description': 'UCI MTB World Cup 2014: Fort William, UK - Downhill Finals',
+            },
+        },
+        {
+            # playlist test
+            # from http://support.brightcove.com/en/video-cloud/docs/playlist-support-single-video-players
+            'url': 'http://c.brightcove.com/services/viewer/htmlFederated?playerID=3550052898001&playerKey=AQ%7E%7E%2CAAABmA9XpXk%7E%2C-Kp7jNgisre1fG5OdqpAFUTcs0lP_ZoL',
+            'info_dict': {
+                'title': 'Sealife',
+                'id': '3550319591001',
+            },
+            'playlist_mincount': 7,
+        },
+    ]
+
+    @classmethod
+    def _build_brightcove_url(cls, object_str):
+        """
+        Build a Brightcove URL from an XML string containing
+        <object class="BrightcoveExperience">{params}</object>
+        """
+
+        # Fix up some stupid HTML, see https://github.com/rg3/youtube-dl/issues/1553
+        object_str = re.sub(r'(<param(?:\s+[a-zA-Z0-9_]+="[^"]*")*)>',
+                            lambda m: m.group(1) + '/>', object_str)
+        # Fix up some stupid XML, see https://github.com/rg3/youtube-dl/issues/1608
+        object_str = object_str.replace('<--', '<!--')
+        # remove namespace to simplify extraction
+        object_str = re.sub(r'(<object[^>]*)(xmlns=".*?")', r'\1', object_str)
+        object_str = fix_xml_ampersands(object_str)
+
+        try:
+            object_doc = xml.etree.ElementTree.fromstring(object_str.encode('utf-8'))
+        except compat_xml_parse_error:
+            return
+
+        fv_el = find_xpath_attr(object_doc, './param', 'name', 'flashVars')
+        if fv_el is not None:
+            flashvars = dict(
+                (k, v[0])
+                for k, v in compat_parse_qs(fv_el.attrib['value']).items())
+        else:
+            flashvars = {}
+
+        def find_param(name):
+            if name in flashvars:
+                return flashvars[name]
+            node = find_xpath_attr(object_doc, './param', 'name', name)
+            if node is not None:
+                return node.attrib['value']
+            return None
+
+        params = {}
+
+        playerID = find_param('playerID')
+        if playerID is None:
+            raise ExtractorError('Cannot find player ID')
+        params['playerID'] = playerID
+
+        playerKey = find_param('playerKey')
+        # Not all pages define this value
+        if playerKey is not None:
+            params['playerKey'] = playerKey
+        # Any one of these three fields may hold the id of the video
+        videoPlayer = find_param('@videoPlayer') or find_param('videoId') or find_param('videoID')
+        if videoPlayer is not None:
+            params['@videoPlayer'] = videoPlayer
+        linkBase = find_param('linkBaseURL')
+        if linkBase is not None:
+            params['linkBaseURL'] = linkBase
+        return cls._make_brightcove_url(params)
+
+    @classmethod
+    def _build_brightcove_url_from_js(cls, object_js):
+        # The layout of JS is as follows:
+        # customBC.createVideo = function (width, height, playerID, playerKey, videoPlayer, VideoRandomID) {
+        #   // build Brightcove <object /> XML
+        # }
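+        # An illustrative call the regex below is meant to match (all values
+        # hypothetical; the playerKey is 'AQ' followed by 48 more characters):
+        #   customBC.createVideo(640, 360, '1234567890',
+        #       'AQ~~,AAAAxxxx...', '9876543210', 'someRandomId');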
+        m = re.search(
+            r'''(?x)customBC\.createVideo\(
+                .*?                                                  # skipping width and height
+                ["\'](?P<playerID>\d+)["\']\s*,\s*                   # playerID
+                ["\'](?P<playerKey>AQ[^"\']{48})[^"\']*["\']\s*,\s*  # playerKey begins with AQ and is 50 characters
+                                                                     # in length, however it's appended to itself
+                                                                     # in places, so truncate
+                ["\'](?P<videoID>\d+)["\']                           # @videoPlayer
+            ''', object_js)
+        if m:
+            return cls._make_brightcove_url(m.groupdict())
+
+    @classmethod
+    def _make_brightcove_url(cls, params):
+        data = compat_urllib_parse.urlencode(params)
+        return cls._FEDERATED_URL_TEMPLATE % data
+
+    @classmethod
+    def _extract_brightcove_url(cls, webpage):
+        """Try to extract the brightcove url from the webpage, returns None
+        if it can't be found
+        """
+        urls = cls._extract_brightcove_urls(webpage)
+        return urls[0] if urls else None
+
+    @classmethod
+    def _extract_brightcove_urls(cls, webpage):
+        """Return a list of all Brightcove URLs from the webpage """
+
+        url_m = re.search(
+            r'<meta\s+property=[\'"]og:video[\'"]\s+content=[\'"](https?://(?:secure|c)\.brightcove\.com/[^\'"]+)[\'"]',
+            webpage)
+        if url_m:
+            url = unescapeHTML(url_m.group(1))
+            # Some sites don't add it, we can't download with this url, for example:
+            # http://www.ktvu.com/videos/news/raw-video-caltrain-releases-video-of-man-almost/vCTZdY/
+            if 'playerKey' in url or 'videoId' in url:
+                return [url]
+
+        matches = re.findall(
+            r'''(?sx)<object
+            (?:
+                [^>]+?class=[\'"][^>]*?BrightcoveExperience.*?[\'"] |
+                [^>]*?>\s*<param\s+name="movie"\s+value="https?://[^/]*brightcove\.com/
+            ).+?>\s*</object>''',
+            webpage)
+        if matches:
+            return list(filter(None, [cls._build_brightcove_url(m) for m in matches]))
+
+        return list(filter(None, [
+            cls._build_brightcove_url_from_js(custom_bc)
+            for custom_bc in re.findall(r'(customBC\.createVideo\(.+?\);)', webpage)]))
+
+    def _real_extract(self, url):
+        url, smuggled_data = unsmuggle_url(url, {})
+
+        # Change the 'videoId' and similar fields to '@videoPlayer'
+        url = re.sub(r'(?<=[?&])(videoI(d|D)|bctid)', '%40videoPlayer', url)
+        # Change bckey (used by bcove.me urls) to playerKey
+        url = re.sub(r'(?<=[?&])bckey', 'playerKey', url)
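+        # e.g. '...?videoId=123&bckey=AQ~~,...' becomes
+        # '...?%40videoPlayer=123&playerKey=AQ~~,...'; '%40' is the
+        # URL-encoded '@', which parse_qs decodes again below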
+        mobj = re.match(self._VALID_URL, url)
+        query_str = mobj.group('query')
+        query = compat_urlparse.parse_qs(query_str)
+
+        videoPlayer = query.get('@videoPlayer')
+        if videoPlayer:
+            # We set the original url as the default 'Referer' header
+            referer = smuggled_data.get('Referer', url)
+            return self._get_video_info(
+                videoPlayer[0], query_str, query, referer=referer)
+        elif 'playerKey' in query:
+            player_key = query['playerKey']
+            return self._get_playlist_info(player_key[0])
+        else:
+            raise ExtractorError(
+                'Cannot find playerKey= variable. Did you forget quotes in a shell invocation?',
+                expected=True)
+
+    def _get_video_info(self, video_id, query_str, query, referer=None):
+        request_url = self._FEDERATED_URL_TEMPLATE % query_str
+        req = compat_urllib_request.Request(request_url)
+        linkBase = query.get('linkBaseURL')
+        if linkBase is not None:
+            referer = linkBase[0]
+        if referer is not None:
+            req.add_header('Referer', referer)
+        webpage = self._download_webpage(req, video_id)
+
+        error_msg = self._html_search_regex(
+            r"<h1>We're sorry.</h1>([\s\n]*<p>.*?</p>)+", webpage,
+            'error message', default=None)
+        if error_msg is not None:
+            raise ExtractorError(
+                'brightcove said: %s' % error_msg, expected=True)
+
+        self.report_extraction(video_id)
+        info = self._search_regex(r'var experienceJSON = ({.*});', webpage, 'json')
+        info = json.loads(info)['data']
+        video_info = info['programmedContent']['videoPlayer']['mediaDTO']
+        video_info['_youtubedl_adServerURL'] = info.get('adServerURL')
+
+        return self._extract_video_info(video_info)
+
+    def _get_playlist_info(self, player_key):
+        info_url = 'http://c.brightcove.com/services/json/experience/runtime/?command=get_programming_for_experience&playerKey=%s' % player_key
+        playlist_info = self._download_webpage(
+            info_url, player_key, 'Downloading playlist information')
+
+        json_data = json.loads(playlist_info)
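+        # The service is expected to respond with JSON shaped roughly like
+        # (abridged):
+        #   {"videoList": {"id": ..., "mediaCollectionDTO":
+        #       {"displayName": ..., "videoDTOs": [...]}}}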
+        if 'videoList' not in json_data:
+            raise ExtractorError('Empty playlist')
+        playlist_info = json_data['videoList']
+        videos = [self._extract_video_info(video_info) for video_info in playlist_info['mediaCollectionDTO']['videoDTOs']]
+
+        return self.playlist_result(videos, playlist_id='%s' % playlist_info['id'],
+                                    playlist_title=playlist_info['mediaCollectionDTO']['displayName'])
+
+    def _extract_video_info(self, video_info):
+        info = {
+            'id': compat_str(video_info['id']),
+            'title': video_info['displayName'].strip(),
+            'description': video_info.get('shortDescription'),
+            'thumbnail': video_info.get('videoStillURL') or video_info.get('thumbnailURL'),
+            'uploader': video_info.get('publisherName'),
+        }
+
+        renditions = video_info.get('renditions')
+        if renditions:
+            formats = []
+            for rend in renditions:
+                url = rend['defaultURL']
+                if not url:
+                    continue
+                ext = None
+                if
