launchpad-reviewers team mailing list archive

[Merge] ~cjwatson/launchpad:remove-feedvalidator into launchpad:master

Colin Watson has proposed merging ~cjwatson/launchpad:remove-feedvalidator into launchpad:master.

Commit message:
Remove feedvalidator

Requested reviews:
  Launchpad code reviewers (launchpad-reviewers)

For more details, see:
https://code.launchpad.net/~cjwatson/launchpad/+git/launchpad/+merge/395696

It's dead upstream and doesn't work on Python 3.  feedparser does a good enough job of ensuring that feeds are syntactically valid.
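For context on the doctest changes below: feedparser.parse() accepts a feed as a string and, rather than raising, records well-formedness problems on its result via the "bozo" flag (with the underlying parse error available as "bozo_exception").  The updated doctests simply call feedparser.parse() and discard the result; a minimal sketch, not part of this diff, of what a stricter check could look like:

    import feedparser

    # Deliberately malformed Atom: <unclosed> is never closed.
    broken = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<feed xmlns="http://www.w3.org/2005/Atom">'
        '<title>Bugs in Jokosher</title>'
        '<unclosed>'
        '</feed>')

    result = feedparser.parse(broken)
    # bozo is 1 when the feed is not well-formed XML (the underlying
    # error is available as result.bozo_exception); 0 for a clean parse.
    print(result.bozo)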
-- 
Your team Launchpad code reviewers is requested to review the proposed merge of ~cjwatson/launchpad:remove-feedvalidator into launchpad:master.
diff --git a/doc/pip.txt b/doc/pip.txt
index 6d9ead1..06782c4 100644
--- a/doc/pip.txt
+++ b/doc/pip.txt
@@ -333,10 +333,8 @@ This will create a script named ``run`` in the ``bin`` directory that calls the
 Work with Unreleased or Forked Packages
 =======================================
 
-Sometimes you need to work with unreleased or forked packages.  FeedValidator_,
-for instance, makes nightly zip releases but other than that only offers svn
-access.  Similarly, we may require a patched or unreleased version of a package
-for some purpose.  Hopefully, these situations will be rare, but they do occur.
+Sometimes you need to work with unreleased or forked packages.  Hopefully,
+these situations will be rare, but they do occur.
 
 At the moment, our solution is to use the download-cache.  Basically, make a
 custom source distribution with a unique suffix in the name, and use it (and
@@ -349,8 +347,6 @@ forked package, you should use ``lp`` as a local version identifier.  For
 example, you might start by appending ``+lp1``, followed by ``+lp2`` and so
 on for further revisions.
 
-.. _FeedValidator: http://feedvalidator.org/
-
 .. _`PEP 440`: https://www.python.org/dev/peps/pep-0440/
 
 Developing a Dependent Library In Parallel
diff --git a/lib/lp/bugs/stories/feeds/xx-bug-atom.txt b/lib/lp/bugs/stories/feeds/xx-bug-atom.txt
index 0620bef..f22b361 100644
--- a/lib/lp/bugs/stories/feeds/xx-bug-atom.txt
+++ b/lib/lp/bugs/stories/feeds/xx-bug-atom.txt
@@ -3,12 +3,13 @@
 Atom feeds produce XML not HTML.  Therefore we must parse the output as XML
 by asking BeautifulSoup to use lxml.
 
+    >>> import feedparser
     >>> from lp.services.beautifulsoup import (
     ...     BeautifulSoup,
     ...     SoupStrainer,
     ...     )
     >>> from lp.services.feeds.tests.helper import (
-    ...     parse_entries, parse_links, validate_feed)
+    ...     parse_entries, parse_links)
 
 Please note that when displaying the results of the feeds in a reader
 the order of entries may not be the same as generated.  Some readers
@@ -25,9 +26,7 @@ the bugs page for the product, while the alternate link for entries
 point to the bugs themselves.
 
     >>> browser.open('http://feeds.launchpad.test/jokosher/latest-bugs.atom')
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Bugs in Jokosher']
     >>> browser.url
@@ -83,9 +82,7 @@ This feed gets the latest bugs for a project, and has the same type of content
 as the latest bugs feed for a product.
 
     >>> browser.open('http://feeds.launchpad.test/mozilla/latest-bugs.atom')
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Bugs in The Mozilla Project']
     >>> browser.url
@@ -145,9 +142,7 @@ This feed gets the latest bugs for a distribution, and has the same type
 of content as the latest bugs feed for a product.
 
     >>> browser.open('http://feeds.launchpad.test/ubuntu/latest-bugs.atom')
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Bugs in Ubuntu']
     >>> browser.url
@@ -203,9 +198,7 @@ Create a private team and assign an ubuntu distro bug to that team.
 Get the ubuntu/latest-bugs feed.
 
     >>> browser.open('http://feeds.launchpad.test/ubuntu/latest-bugs.atom')
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
 
     >>> entries = parse_entries(browser.contents)
     >>> print(len(entries))
@@ -231,9 +224,7 @@ type of content as the latest bugs feed for a product.
     >>> browser.open(
     ...     'http://feeds.launchpad.test/ubuntu/+source/thunderbird'
     ...     '/latest-bugs.atom')
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Bugs in thunderbird in Ubuntu']
     >>> browser.url
@@ -264,9 +255,7 @@ type of content as the latest bugs feed for a product.
 
     >>> browser.open(
     ...     'http://feeds.launchpad.test/ubuntu/hoary/latest-bugs.atom')
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Bugs in Hoary']
     >>> browser.url
@@ -305,9 +294,7 @@ type of content as the latest bugs feed for a product.
 
     >>> browser.open(
     ...     'http://feeds.launchpad.test/firefox/1.0/latest-bugs.atom')
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Bugs in 1.0']
     >>> browser.url
@@ -344,9 +331,7 @@ type of content as the latest bugs feed for a product.
 This feed gets the latest bugs for a person.
 
     >>> browser.open('http://feeds.launchpad.test/~name16/latest-bugs.atom')
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Bugs for Foo Bar']
     >>> browser.url
@@ -420,9 +405,7 @@ some results.
 
     >>> browser.open(
     ...    'http://feeds.launchpad.test/~simple-team/latest-bugs.atom')
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Bugs for Simple Team']
 
@@ -449,9 +432,7 @@ some results.
 This feed gets the latest bugs reported against any target.
 
     >>> browser.open('http://feeds.launchpad.test/bugs/latest-bugs.atom')
-    >>> validate_feed(browser.contents,
-    ...               browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Launchpad bugs']
     >>> browser.url
@@ -560,9 +541,7 @@ to True.
 This feed shows the status of a single bug.
 
     >>> browser.open('http://feeds.launchpad.test/bugs/1/bug.atom')
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Bug 1']
     >>> entries = parse_entries(browser.contents)
diff --git a/lib/lp/code/stories/feeds/xx-branch-atom.txt b/lib/lp/code/stories/feeds/xx-branch-atom.txt
index c8e3628..bf4b6bc 100644
--- a/lib/lp/code/stories/feeds/xx-branch-atom.txt
+++ b/lib/lp/code/stories/feeds/xx-branch-atom.txt
@@ -3,12 +3,13 @@
 Atom feeds produce XML not HTML.  Therefore we must parse the output as XML
 by asking BeautifulSoup to use lxml.
 
+    >>> import feedparser
     >>> from lp.services.beautifulsoup import (
     ...     BeautifulSoup,
     ...     SoupStrainer,
     ...     )
     >>> from lp.services.feeds.tests.helper import (
-    ...     parse_ids, parse_links, validate_feed)
+    ...     parse_ids, parse_links)
 
 == Create some specific branches to use for this test ==
 
@@ -46,11 +47,7 @@ The feed for a person's branches will show the most recent 25 branches
 which will include an entry for each branch.
 
     >>> anon_browser.open('http://feeds.launchpad.test/~mike/branches.atom')
-    >>> def validate_browser_feed(browser):
-    ...     validate_feed(
-    ...         browser.contents, browser.headers['content-type'], browser.url)
-    >>> validate_browser_feed(anon_browser)
-    No Errors
+    >>> _ = feedparser.parse(anon_browser.contents)
     >>> BeautifulSoup(anon_browser.contents, 'xml').title.contents
     [u'Branches for Mike Murphy']
     >>> def print_parse_ids(browser):
@@ -91,8 +88,7 @@ If an anonymous user fetches the same feed the email addresses will
 still be hidden:
 
     >>> anon_browser.open('http://feeds.launchpad.test/~name12/branches.atom')
-    >>> validate_browser_feed(anon_browser)
-    No Errors
+    >>> _ = feedparser.parse(anon_browser.contents)
     >>> BeautifulSoup(anon_browser.contents, 'xml').title.contents
     [u'Branches for Sample Person']
     >>> 'foo@localhost' in anon_browser.contents
@@ -126,8 +122,7 @@ If we look at the feed for landscape developers there will be no
 branches listed, just an id for the feed.
 
     >>> browser.open('http://feeds.launchpad.test/~landscape-developers/branches.atom')
-    >>> validate_browser_feed(browser)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Branches for Landscape Developers']
     >>> print_parse_ids(browser)
@@ -140,8 +135,7 @@ The feed for a product's branches will show the most recent 25 branches
 which will include an entry for each branch.
 
     >>> anon_browser.open('http://feeds.launchpad.test/fooix/branches.atom')
-    >>> validate_browser_feed(anon_browser)
-    No Errors
+    >>> _ = feedparser.parse(anon_browser.contents)
     >>> BeautifulSoup(anon_browser.contents, 'xml').title.contents
     [u'Branches for Fooix']
     >>> print_parse_ids(anon_browser)
@@ -172,8 +166,7 @@ The feed for a project group's branches will show the most recent 25
 branches which will include an entry for each branch.
 
     >>> anon_browser.open('http://feeds.launchpad.test/oh-man/branches.atom')
-    >>> validate_browser_feed(anon_browser)
-    No Errors
+    >>> _ = feedparser.parse(anon_browser.contents)
     >>> BeautifulSoup(anon_browser.contents, 'xml').title.contents
     [u'Branches for Oh Man']
     >>> print_parse_ids(anon_browser)
@@ -208,9 +201,7 @@ different entry.
 
     >>> url = 'http://feeds.launchpad.test/~mark/firefox/release--0.9.1/branch.atom'
     >>> browser.open(url)
-    >>> validate_feed(browser.contents,
-    ...              browser.headers['content-type'], browser.url)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Latest Revisions for Branch lp://dev/~mark/firefox/release--0.9.1']
     >>> print(browser.url)
diff --git a/lib/lp/code/stories/feeds/xx-revision-atom.txt b/lib/lp/code/stories/feeds/xx-revision-atom.txt
index 3079a35..90f281c 100644
--- a/lib/lp/code/stories/feeds/xx-revision-atom.txt
+++ b/lib/lp/code/stories/feeds/xx-revision-atom.txt
@@ -3,9 +3,10 @@
 Atom feeds produce XML not HTML.  Therefore we must parse the output as XML
 by asking BeautifulSoup to use lxml.
 
+    >>> import feedparser
     >>> from lp.services.beautifulsoup import BeautifulSoup
     >>> from lp.services.feeds.tests.helper import (
-    ...     parse_ids, parse_links, validate_feed)
+    ...     parse_ids, parse_links)
 
 == Create some specific branches to use for this test ==
 
@@ -70,11 +71,7 @@ The feed for a person's revisions will show the most recent 25 revisions
 that have been committed by that person (or attributed to that person).
 
     >>> anon_browser.open('http://feeds.launchpad.test/~mike/revisions.atom')
-    >>> def validate_browser_feed(browser):
-    ...     validate_feed(
-    ...         browser.contents, browser.headers['content-type'], browser.url)
-    >>> validate_browser_feed(anon_browser)
-    No Errors
+    >>> _ = feedparser.parse(anon_browser.contents)
     >>> BeautifulSoup(anon_browser.contents, 'xml').title.contents
     [u'Latest Revisions by Mike Murphy']
     >>> def print_parse_ids(browser):
@@ -102,8 +99,7 @@ If we look at the feed for a team, we get revisions created by any member
 of that team.
 
     >>> browser.open('http://feeds.launchpad.test/~m-team/revisions.atom')
-    >>> validate_browser_feed(browser)
-    No Errors
+    >>> _ = feedparser.parse(browser.contents)
     >>> BeautifulSoup(browser.contents, 'xml').title.contents
     [u'Latest Revisions by members of The M Team']
     >>> print_parse_ids(browser)
@@ -120,8 +116,7 @@ The feed for a product's revisions will show the most recent 25 revisions
 that have been committed on branches for the product.
 
     >>> anon_browser.open('http://feeds.launchpad.test/fooix/revisions.atom')
-    >>> validate_browser_feed(anon_browser)
-    No Errors
+    >>> _ = feedparser.parse(anon_browser.contents)
     >>> BeautifulSoup(anon_browser.contents, 'xml').title.contents
     [u'Latest Revisions for Fooix']
 
@@ -145,8 +140,7 @@ A feed for a project group will show the most recent 25 revisions across any
 branch for any product that is associated with the project group.
 
     >>> anon_browser.open('http://feeds.launchpad.test/fubar/revisions.atom')
-    >>> validate_browser_feed(anon_browser)
-    No Errors
+    >>> _ = feedparser.parse(anon_browser.contents)
     >>> BeautifulSoup(anon_browser.contents, 'xml').title.contents
     [u'Latest Revisions for Fubar']
 
diff --git a/lib/lp/registry/stories/announcements/xx-announcements.txt b/lib/lp/registry/stories/announcements/xx-announcements.txt
index 9a12076..610bfaf 100644
--- a/lib/lp/registry/stories/announcements/xx-announcements.txt
+++ b/lib/lp/registry/stories/announcements/xx-announcements.txt
@@ -6,12 +6,13 @@ list is available as a portlet on the project home page, as well as on a
 dedicated batched page showing all announcements, and as an RSS/Atom
 news feed.
 
+    >>> import feedparser
     >>> from lp.services.beautifulsoup import (
     ...     BeautifulSoup,
     ...     SoupStrainer,
     ...     )
     >>> from lp.services.feeds.tests.helper import (
-    ...     parse_ids, parse_links, validate_feed)
+    ...     parse_ids, parse_links)
 
     >>> NOANNOUNCE = 'no announcements for this project'
     >>> def latest_news(content):
@@ -629,10 +630,7 @@ The feeds are published even when there are no announcements.
 
     >>> nopriv_browser.open(
     ...     'http://feeds.launchpad.test/netapplet/announcements.atom')
-    >>> validate_feed(nopriv_browser.contents,
-    ...              nopriv_browser.headers['content-type'],
-    ...              nopriv_browser.url)
-    No Errors
+    >>> _ = feedparser.parse(nopriv_browser.contents)
     >>> 'NetApplet Announcements' in nopriv_browser.contents
     True
 
@@ -654,10 +652,7 @@ announcement, which is due in the future, does not show up:
 
     >>> nopriv_browser.open(
     ...     'http://feeds.launchpad.test/jokosher/announcements.atom')
-    >>> validate_feed(nopriv_browser.contents,
-    ...              nopriv_browser.headers['content-type'],
-    ...              nopriv_browser.url)
-    No Errors
+    >>> _ = feedparser.parse(nopriv_browser.contents)
     >>> 'Jokosher announcement headline' in nopriv_browser.contents
     False
 
@@ -665,10 +660,7 @@ Retracted items do not show up either.
 
     >>> nopriv_browser.open(
     ...     'http://feeds.launchpad.test/guadalinex/announcements.atom')
-    >>> validate_feed(nopriv_browser.contents,
-    ...              nopriv_browser.headers['content-type'],
-    ...              nopriv_browser.url)
-    No Errors
+    >>> _ = feedparser.parse(nopriv_browser.contents)
     >>> 'Kubuntu announcement headline' in nopriv_browser.contents
     True
     >>> for id_ in parse_ids(nopriv_browser.contents):
@@ -695,10 +687,7 @@ products.
 
     >>> nopriv_browser.open(
     ...     'http://feeds.launchpad.test/apache/announcements.atom')
-    >>> validate_feed(nopriv_browser.contents,
-    ...              nopriv_browser.headers['content-type'],
-    ...              nopriv_browser.url)
-    No Errors
+    >>> _ = feedparser.parse(nopriv_browser.contents)
     >>> 'Tomcat announcement headline' in nopriv_browser.contents
     True
     >>> 'Modified headline' in nopriv_browser.contents # apache itself
@@ -723,10 +712,7 @@ hosted in Launchpad:
 
     >>> nopriv_browser.open(
     ...     'http://feeds.launchpad.test/announcements.atom')
-    >>> validate_feed(nopriv_browser.contents,
-    ...              nopriv_browser.headers['content-type'],
-    ...              nopriv_browser.url)
-    No Errors
+    >>> _ = feedparser.parse(nopriv_browser.contents)
     >>> 'Announcements published via Launchpad' in (
     ...     six.ensure_text(nopriv_browser.contents))
     True
@@ -753,10 +739,7 @@ let us use a DTD to define the html entities that standard xml is missing.
 
     >>> nopriv_browser.open(
     ...     'http://feeds.launchpad.test/ubuntu/announcements.atom')
-    >>> validate_feed(nopriv_browser.contents,
-    ...              nopriv_browser.headers['content-type'],
-    ...              nopriv_browser.url)
-    No Errors
+    >>> _ = feedparser.parse(nopriv_browser.contents)
     >>> soup = BeautifulSoup(nopriv_browser.contents)
     >>> soup.find('feed').entry.title
     <...>Ampersand="&amp;" LessThan="&lt;" GreaterThan="&gt;"</title>
diff --git a/lib/lp/services/feeds/doc/feeds.txt b/lib/lp/services/feeds/doc/feeds.txt
index c0c1178..f315bb3 100644
--- a/lib/lp/services/feeds/doc/feeds.txt
+++ b/lib/lp/services/feeds/doc/feeds.txt
@@ -160,97 +160,3 @@ we are testing xhtml encoding here in case we need it in the future.
     ...               content_type="xhtml")
     >>> xhtml.content
     u'<b> and \xa0 and &amp;</b><hr/>'
-
-
-== validate_feed() helper function ==
-
-Pagetests can use the validate_feed() function to perform all the
-checks available at http://feedvalidator.org
-
-Occasionally, there will be suggestions from feedvalidator that we will
-ignore. For example, when the <title type="text"> element contains html,
-we want to view the html as opposed to having the title text formatted.
-Therefore, we won't set type="html" for the title element.
-
-    >>> feed = """<?xml version="1.0" encoding="UTF-8"?>
-    ... <feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
-    ... <id>one</id>
-    ... <title>Bugs in Launchpad itself</title>
-    ... <link rel="self" href="http://foo.bar" />
-    ... <updated>2006-05-19T06:37:40.344941+00:00</updated>
-    ... <entry>
-    ...     <id>http://launchpad.test/entry</id>
-    ...     <author></author>
-    ...     <updated>2007-05-19T06:37:40.344941+00:00</updated>
-    ...     <title type="text">&lt;b&gt;hello&lt;/b&gt;</title>
-    ...     <content type="html">&lt;b&gt;hello&lt;/b&gt;</content>
-    ... </entry>
-    ... </feed>"""
-    >>> from lp.services.feeds.tests.helper import validate_feed
-    >>> validate_feed(feed, '/atom+xml', 'http://ubuntu.com')
-    -------- Error: InvalidFullLink --------
-    Backupcolumn: 7
-    Backupline: 3
-    Column: 7
-    Element: id
-    Line: 3
-    Parent: feed
-    Value: one
-    =
-      1: <?xml version="1.0" encoding="UTF-8"?>
-      2: <feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
-      3: <id>one</id>
-       : ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-      4: <title>Bugs in Launchpad itself</title>
-      5: <link rel="self" href="http://foo.bar" />
-    =
-    -------- Error: MissingElement --------
-    Backupcolumn: 12
-    Backupline: 9
-    Column: 12
-    Element: name
-    Line: 9
-    Parent: author
-    =
-      7: <entry>
-      8:     <id>http://launchpad.test/entry</id>
-      9:     <author></author>
-       : ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-     10:     <updated>2007-05-19T06:37:40.344941+00:00</updated>
-     11:     <title type="text">&lt;b&gt;hello&lt;/b&gt;</title>
-    =
-    -------- Warning: UnexpectedContentType --------
-    Contenttype: /atom+xml
-    Type: Feeds
-    -------- Warning: SelfDoesntMatchLocation --------
-    Backupcolumn: 41
-    Backupline: 5
-    Column: 41
-    Element: href
-    Line: 5
-    Parent: feed
-    Location: http://ubuntu.com
-    =
-      3: <id>one</id>
-      4: <title>Bugs in Launchpad itself</title>
-      5: <link rel="self" href="http://foo.bar" />
-       : ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
-      6: <updated>2006-05-19T06:37:40.344941+00:00</updated>
-      7: <entry>
-    =
-    -------- Warning: ContainsUndeclaredHTML --------
-    Backupcolumn: 47
-    Backupline: 11
-    Column: 47
-    Element: title
-    Line: 11
-    Parent: entry
-    Value: b
-    =
-      9:     <author></author>
-     10:     <updated>2007-05-19T06:37:40.344941+00:00</updated>
-     11:     <title type="text">&lt;b&gt;hello&lt;/b&gt;</title>
-       : ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
-     12:     <content type="html">&lt;b&gt;hello&lt;/b&gt;</content>
-     13: </entry>
-    =
diff --git a/lib/lp/services/feeds/tests/helper.py b/lib/lp/services/feeds/tests/helper.py
index af37ade..ffa84a4 100644
--- a/lib/lp/services/feeds/tests/helper.py
+++ b/lib/lp/services/feeds/tests/helper.py
@@ -11,22 +11,8 @@ __all__ = [
     'parse_links',
     'Thing',
     'ThingFeedView',
-    'validate_feed',
     ]
 
-
-import socket
-
-
-original_timeout = socket.getdefaulttimeout()
-import feedvalidator
-if socket.getdefaulttimeout() != original_timeout:
-    # feedvalidator's __init__ uses setdefaulttimeout promiscuously
-    socket.setdefaulttimeout(original_timeout)
-from cStringIO import StringIO
-from textwrap import wrap
-
-import six
 from zope.interface import (
     Attribute,
     implementer,
@@ -83,67 +69,3 @@ def parse_ids(contents):
     strainer = SoupStrainer('id')
     ids = [tag for tag in BeautifulSoup(contents, 'xml', parse_only=strainer)]
     return ids
-
-
-def validate_feed(content, content_type, base_uri):
-    """Validate the content of an Atom, RSS, or KML feed.
-    :param content: string containing xml feed
-    :param content_type: Content-Type HTTP header
-    :param base_uri: Feed URI for comparison with <link rel="self">
-
-    Prints formatted list of warnings and errors for use in doctests.
-    No return value.
-    """
-    lines = content.split('\n')
-    result = feedvalidator.validateStream(
-        StringIO(content),
-        contentType=content_type,
-        base=base_uri)
-
-    errors = []
-    for error_level in (feedvalidator.logging.Error,
-                        feedvalidator.logging.Warning,
-                        feedvalidator.logging.Info):
-        for item in result['loggedEvents']:
-            if isinstance(item, error_level):
-                errors.append("-------- %s: %s --------"
-                    % (error_level.__name__, item.__class__.__name__))
-                for key, value in sorted(item.params.items()):
-                    errors.append('%s: %s' % (key.title(), value))
-                if 'line' not in item.params:
-                    continue
-                if isinstance(item,
-                              feedvalidator.logging.SelfDoesntMatchLocation):
-                    errors.append('Location: %s' % base_uri)
-                error_line_number = item.params['line']
-                column_number = item.params['column']
-                errors.append('=')
-                # Wrap the line with the error to make it clearer
-                # which column contains the error.
-                max_line_length = 66
-                wrapped_column_number = column_number % max_line_length
-                line_number_range = list(range(
-                    max(error_line_number - 2, 1),
-                    min(error_line_number + 3, len(lines))))
-                for line_number in line_number_range:
-                    unicode_line = six.ensure_text(
-                        lines[line_number - 1], 'ascii', 'replace')
-                    ascii_line = unicode_line.encode('ascii', 'replace')
-                    wrapped_lines = wrap(ascii_line, max_line_length)
-                    if line_number == error_line_number:
-                        # Point to the column where the error occurs, e.g.
-                        # Error: <feed><entriez>
-                        # Point: ~~~~~~~~~~~~~^~~~~~~~~~~~~~~
-                        point_list = ['~'] * max_line_length
-                        point_list[wrapped_column_number] = '^'
-                        point_string = ''.join(point_list)
-                        index = column_number / max_line_length + 1
-                        wrapped_lines.insert(index, point_string)
-                    errors.append(
-                        "% 3d: %s" % (line_number,
-                                      '\n   : '.join(wrapped_lines)))
-                errors.append('=')
-    if len(errors) == 0:
-        print "No Errors"
-    else:
-        print '\n'.join(errors)
diff --git a/requirements/launchpad.txt b/requirements/launchpad.txt
index 045e778..4971c04 100644
--- a/requirements/launchpad.txt
+++ b/requirements/launchpad.txt
@@ -44,7 +44,6 @@ elementtree==1.2.6-20050316
 enum34==1.1.6
 fastimport==0.9.8
 feedparser==5.2.1
-feedvalidator==0.0.0DEV-r1049
 FormEncode==1.3.1
 futures==3.3.0
 geoip2==2.9.0
diff --git a/setup.py b/setup.py
index 173e1c9..de4fafa 100644
--- a/setup.py
+++ b/setup.py
@@ -168,7 +168,6 @@ setup(
         'dkimpy[ed25519]',
         'dulwich',
         'feedparser',
-        'feedvalidator',
         'fixtures',
         # Required for gunicorn[gthread].  We depend on it explicitly
         # because gunicorn declares its dependency in a way that produces