
ubuntu-phone team mailing list archive

Re: The problem with "no background processing for apps"

 

On Fri, Oct 9, 2015 at 4:06 PM, Matthew Paul Thomas <mpt@xxxxxxxxxxxxx> wrote:
>
> Olivier Tilloy wrote on 02/10/15 11:55:
>>
>> On Fri, Oct 2, 2015 at 12:10 PM, Matthew Paul Thomas
>> <mpt@xxxxxxxxxxxxx> wrote:
>>
>> ...
>>> A better solution, perhaps, would be to expand the Download
>>> Manager service.
>>> <https://developer.ubuntu.com/api/apps/qml/sdk-15.04/Ubuntu.DownloadManager.index/>
>>>
>>> First, make it accept more than just a URL. You can encode HTTP
>>> authentication into a URL, but you can't provide cookies, client
>>> certificates, a POST form payload, or even an HTTP Referer.
>>>
>>> Second, instead of making it an opt-in API for apps to use for "a
>>> long running connection", just make it automatic for every
>>> transfer. There's little point in halting a transfer because an
>>> app is going into the background; the app will usually just
>>> restart it again later, using more battery (and more data) in
>>> the long run.
>>>
>>> If those two things were done, the Web browser could use it for
>>> almost everything. So, as long as the HTML of a page had been
>>> loaded and the other resources requested, the rest of the page
>>> would load in the background.
>>
>> While that might sound like a solution to the problem on paper,
>> it won’t fly, for several reasons:
>>
>> - afaik this is not part of the design goals of udm, so it might
>> require quite some refactoring (or even a complete rewrite) to
>> cater for this new use case
>
> I don't understand why udm is so limited by design, when the things I
> listed would be needed for the download service to handle even just
> browser downloads reliably. If that wasn't use case number 1, what was?
>
> Even if we wanted to, the browser can't easily limit its use of the
> download service to just things that are going to open outside of the
> browser itself. This is because the browser can't tell what file type
> a particular response is going to be, until it has already started
> downloading. You know this, because you reported the bug. :-)
> <http://launchpad.net/bugs/1500742>

This is not what that bug is about. Oxide knows what content it’s able
to handle, so when it emits a downloadRequested signal, we’re
guaranteed that it’s for an actual file to download. We might not know
the MIME type in advance, but we know that we need to delegate the
download to an external service. The MIME type issue is a separate
one, which is being addressed by allowing the browser to download and
own arbitrary files, without having to pick a target application
upfront.
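
To make the hand-off concrete, here is a rough QML sketch of the kind
of delegation I'm describing. The import versions and the exact shape
of the request object passed by the downloadRequested signal are from
memory, so treat them as assumptions and double-check them against the
Oxide and DownloadManager documentation:

    import QtQuick 2.4
    import com.canonical.Oxide 1.0
    import Ubuntu.DownloadManager 1.2

    Item {
        // Non-visual helper that hands the transfer over to the
        // system download service (udm).
        SingleDownload {
            id: fileDownload
            autoStart: true
        }

        WebView {
            anchors.fill: parent
            url: "http://example.org"

            // Oxide only emits this for content it cannot render
            // itself, so anything arriving here is an actual file
            // to download.
            onDownloadRequested: {
                // 'request.url' is an assumption about the signal
                // argument; check the Oxide API reference for the
                // exact property names.
                fileDownload.download(request.url)
            }
        }
    }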


> How are you going to hand off some requests, and not others, to the
> download service if you can't tell which ones they should be until the
> response has already started downloading?
>
> You might say, well, in those cases we'll just cancel the browser
> download and get the download service to request it from scratch. But
> that wouldn't just be wasting time and bandwidth. It would also
> sometimes cause the download to fail, because the download service
> isn't able to provide those things I listed: cookies, Referer, POST
> form payload etc.

POST requests may not be supported, but the rest already is: a
SingleDownload object accepts headers and metadata, which allow
passing cookies, a referer, and a user agent, among other things.
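
For illustration, a minimal QML sketch of what I mean. The header
values are placeholders, and the property and signal names are as I
remember them from the sdk-15.04 docs, so double-check them there:

    import QtQuick 2.4
    import Ubuntu.DownloadManager 1.2

    Item {
        SingleDownload {
            id: fileDownload
            autoStart: true
            // Extra request headers: cookies, a referer and a custom
            // user agent can be forwarded to the download service
            // this way.
            headers: ({
                "Cookie": "sessionid=placeholder",
                "Referer": "http://example.org/page.html",
                "User-Agent": "Mozilla/5.0 (Ubuntu; Mobile)"
            })
            metadata: Metadata {
                title: "example download"
                showInIndicator: true
            }
            onFinished: console.log("downloaded to " + path)
        }

        Component.onCompleted:
            fileDownload.download("http://example.org/file.pdf")
    }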


> And even if udm grew to include (or use libcurl for) all those
> features, it still wouldn't work in the case of POST.
> <http://launchpad.net/bugs/1405517> POST requests are not idempotent,
> so you can't safely replay them at all.
> <https://tools.ietf.org/html/rfc7231#section-4.2.2> It seems to me
> that the only safe way for a download, resulting from a POST request,
> to be handled by a download service is if the download service is
> handling the initial request.

It seems to me that initiating a file download as a response to a POST
request would be abusing the meaning of the verb. But I don’t know
that the HTTP specification prevents doing this. It’s something we’d
need to test, and file a bug for if necessary.


>> - oxide (and chromium under its hood) has a highly optimized engine
>> to perform HTTP requests, tailored especially for the browser
>> use-case, that can handle a number of concurrent requests. It would
>> be naive to think we can replicate that in udm without significant
>> effort, given finite resources, if at all possible
>>
>> - that engine is not meant to be easily replaced in chromium;
>> trying to do so would result in a significant amount of work
>> (again, if at all possible), and an increased maintenance cost
>
> What about the reverse possibility, then: Could udm use oxide whenever
> the protocol is http: or https:?

What would be the benefit of doing that, if the download manager
already handles file downloads well (except maybe for those resulting
from a POST request, to be investigated)?


>> That doesn’t mean we shouldn’t try and think of clever solutions to
>> this problem. Ensuring that a page is fully loaded before the
>> corresponding process is suspended, to avoid CPU and data
>> consumption later on when the browser regains focus, is desirable.
>
> That would be another solution, yes. But it would mean special-casing
> the browser app, waiting for its load() event. And with the Web in its
> current state, "fully loaded" often means "30 seconds later, once the
> last ad network on the page has finished eleventeen redirects and
> rendering deeply nested iframes" -- which gives a lot of time in the
> background for scripts on the page to be sucking on your CPU.

True. That sounds like a complex problem to solve. If we delegated
resource fetching to an external service though, the browser would
need to handle resource expiration (the result of a request might not
make sense any longer for a given page 30 minutes after it was issued, if
the browser was put in the background for that amount of time). That
might not be an issue for mostly-static content, but it would
certainly be one for highly dynamic webapps.

A better approach (maybe?) would be for oxide to pause rendering
entirely while resource fetching finishes. This would still require
special-casing oxide in the lifecycle policy, and a lifecycle-aware
version of oxide, which is not trivial to implement (if at all
possible without heavily modifying chromium).

