Re: The problem with "no background processing for apps"
To: ubuntu-phone <ubuntu-phone@xxxxxxxxxxxxxxxxxxx>
From: Matthew Paul Thomas <mpt@xxxxxxxxxxxxx>
Date: Fri, 9 Oct 2015 15:06:01 +0100
In-reply-to: <CAKyXnk-vBV9Ga-ObAL2=hvqTLvudMWsk=Omb_SfY3kCvV2Y3JQ@mail.gmail.com>
Organization: Canonical Ltd
User-agent: Mozilla/5.0 (X11; Linux i686; rv:38.0) Gecko/20100101 Thunderbird/38.2.0

Olivier Tilloy wrote on 02/10/15 11:55:
>
> On Fri, Oct 2, 2015 at 12:10 PM, Matthew Paul Thomas
> <mpt@xxxxxxxxxxxxx> wrote:
>
> ...
>> A better solution, perhaps, would be to expand the Download
>> Manager service.
>> <https://developer.ubuntu.com/api/apps/qml/sdk-15.04/Ubuntu.DownloadManager.index/>
>>
>> First, make it accept more than just a URL. You can encode HTTP
>> authentication into a URL, but you can't provide cookies, client
>> certificates, a POST form payload, or even an HTTP Referer.
>>
>> Second, instead of making it an opt-in API for apps to use for "a
>> long running connection", just make it automatic for every
>> transfer. There's little point in halting a transfer because an
>> app is going into the background; the app will usually just
>> restart it again later, using more battery (and more data) in
>> the long run.
>>
>> If those two things were done, the Web browser could use it for
>> almost everything. So as long as the HTML of a page had been
>> loaded, with the other resources requested, the rest of the page
>> would load in the background.
>
> While that might sound like a solution to the problem on paper,
> it won’t fly, for several reasons:
>
> - afaik this is not part of the design goals of udm, so it might
> require quite some refactoring (or even a complete rewrite) to
> cater for this new use case

I don't understand why udm is so limited by design, when the things I
listed would be needed for the download service to handle even just
browser downloads reliably. If that wasn't use case number 1, what was?

Even if we wanted to, the browser can't easily limit its use of the
download service to just things that are going to open outside of the
browser itself. This is because the browser can't tell what file type
a particular response is going to be, until it has already started
downloading. You know this, because you reported the bug. :-)
<http://launchpad.net/bugs/1500742>

How are you going to hand off some requests, and not others, to the
download service if you can't tell which ones qualify until the
response has already started downloading?
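To make the timing concrete, here's a rough sketch (Python, purely
illustrative; the URL is made up and none of this is real browser or
udm code):

    # The file type of a response is only known once its headers arrive,
    # i.e. after the transfer has effectively started.
    import urllib.request

    with urllib.request.urlopen("https://example.com/") as resp:
        content_type = resp.headers.get("Content-Type", "")
        disposition = resp.headers.get("Content-Disposition", "")
        # Only at this point could the browser decide whether to render
        # the body itself or hand the transfer to a download service, but
        # the connection is already open and body bytes may be in flight.
        hand_off = ("attachment" in disposition
                    or not content_type.startswith("text/html"))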

You might say, well, in those cases we'll just cancel the browser
download and get the download service to request it from scratch. But
that wouldn't just waste time and bandwidth. It would also sometimes
cause the download to fail, because the download service isn't able to
provide those things I listed: cookies, Referer, POST form payload etc.
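For the sake of argument, here is the sort of request description the
download service would have to accept before taking over (or
restarting) such a transfer could ever be safe. This is a hypothetical
Python sketch; none of these names exist in the real udm or
Ubuntu.DownloadManager API:

    from dataclasses import dataclass, field
    from typing import Optional

    # Hypothetical description of a transfer a download service could
    # take over without breaking it.
    @dataclass
    class TransferRequest:
        url: str
        method: str = "GET"
        body: Optional[bytes] = None                # e.g. a POST form payload
        headers: dict = field(default_factory=dict) # Referer and friends
        cookies: dict = field(default_factory=dict) # session state the site expects
        client_cert: Optional[str] = None           # client certificate, if any

Today the service takes only the url (plus whatever HTTP auth you can
encode into it); everything else in that sketch is exactly what gets
lost when a transfer is re-requested from scratch.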

And even if udm grew to include (or use libcurl for) all those
features, it still wouldn't work in the case of POST.
<http://launchpad.net/bugs/1405517> POST requests are not idempotent,
so you can't safely replay them at all.
<https://tools.ietf.org/html/rfc7231#section-4.2.2> It seems to me
that the only safe way for a download resulting from a POST request
to be handled by a download service is for the download service to
handle the initial request itself.
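The rule being cited is easy to state; a minimal illustrative sketch:

    # Per RFC 7231, GET, HEAD, PUT, DELETE, OPTIONS and TRACE are
    # idempotent; POST is not, so replaying it blindly may repeat a side
    # effect (submit a form twice, place a second order, and so on).
    IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

    def safe_to_replay(method: str) -> bool:
        return method.upper() in IDEMPOTENT_METHODS

    assert safe_to_replay("GET")
    assert not safe_to_replay("POST")  # a download service must not re-send this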

> - oxide (and chromium under its hood) has a highly optimized engine
> to perform HTTP requests, tailored especially for the browser
> use-case, that can handle a number of concurrent requests. It would
> be naive to think we can replicate that in udm without a
> significant effort and finite resources, if at all possible
>
> - that engine is not meant to be easily replaced in chromium,
> trying to do so would result in a significant amount of work
> (again, if at all possible), and an increased maintenance cost

What about the reverse possibility, then: Could udm use oxide whenever
the protocol is http: or https:?

> That doesn’t mean we shouldn’t try and think of clever solutions to
> this problem. Ensuring that a page is fully loaded before the
> corresponding process is suspended, to avoid CPU and data
> consumption later on when the browser regains focus, is desirable.

That would be another solution, yes. But it would mean special-casing
the browser app, waiting for its load() event. And with the Web in its
current state, "fully loaded" often means "30 seconds later, once the
last ad network on the page has finished eleventeen redirects and
rendering deeply nested iframes" -- which gives a lot of time in the
background for scripts on the page to be sucking on your CPU.

--
mpt