duplicity-team team mailing list archive

Re: [Merge] lp:~mterry/duplicity/early-catch-498933 into lp:duplicity

 

> because it is the more elegant approach. you were worrying about too much info by default. ok, so
> we know which info we need, so let's request it. if we can't get it, we'll get a no key=>value pair
> for it, so we can deal with it.

My point was just, no matter what, some backends won't provide the key=>value pair for, say, size (likely because we haven't yet modified the backend to provide it).  So, as you say, no matter what the API looks like, callers have to check to see if the backend provided size.

So it makes more sense to me to have a global expectation of what get_info provides (for now, just size; maybe later, size+last_mod), rather than a per-call expectation.  That is, every time get_info is called, it provides all the info that duplicity maintainers have found a use for so far.  It just seems like overkill to assume we'll accumulate so many bits of info that spelling out which ones we want on each call would pay off.

But ultimately, that's not a big sticking point, I was just suggesting.

> in this case it means we add a new interface like get_info() that backends are free to implement.
> if they do, some extra duplicity functionality kicks in. if not, then it does not ;).

But there's no reason to invent some complicated thing that all backends have to support.  Why not do get_info now?  If we ever find an actual use for a list of info and we want to allow backends to optimize it, we can add a new get_list_info that certain backends can provide; if it's found it will be used, otherwise we can fake it by iterating over a file list and calling get_info on each entry.

I'm just saying, we can optimize later for this problem we don't even have right now, rather than requiring all backends to implement a more complicated function.
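The "fake it" path is cheap enough that we lose nothing by deferring the optimization.  A rough sketch (get_list_info and the tuple shape are assumptions, not anything currently in duplicity):

```python
def get_list_info(backend, filenames):
    # Prefer a backend-native bulk query when one exists (the optional
    # optimization certain backends could provide later)...
    native = getattr(backend, 'get_list_info', None)
    if callable(native):
        return native(filenames)
    # ...else fake it: iterate the listing, one get_info() call per entry.
    return [(name, backend.get_info(name)) for name in filenames]
```

Backends that never define get_list_info keep working unchanged, which is the whole point: the extra functionality kicks in only where someone bothered to implement it.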

> why do we actually log an error here? shouldn't we try num-retries to put the file on the backend
> and only fail if that does not work out?

Backends already do try multiple times.  But it makes sense that if we haven't yet hit the maximum number of retries, we should keep retrying until we do hit the cap, and only log the error once we have.
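In other words, something like this hedged sketch (put_with_retries and num_retries are illustrative names, not duplicity's actual retry machinery):

```python
import time

def put_with_retries(put, num_retries=5, delay=0):
    """Hypothetical wrapper: retry a put until it succeeds or we exhaust
    num_retries, and only surface the failure once the cap is reached."""
    for attempt in range(1, num_retries + 1):
        try:
            return put()
        except Exception:
            if attempt == num_retries:
                raise  # cap reached: now it's worth logging and failing
            if delay:
                time.sleep(delay)  # optional back-off between attempts
```

Transient failures before the cap stay quiet; only exhausting the cap turns into the logged error.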
-- 
https://code.launchpad.net/~mterry/duplicity/early-catch-498933/+merge/72607
Your team duplicity-team is requested to review the proposed merge of lp:~mterry/duplicity/early-catch-498933 into lp:duplicity.
