
launchpad-dev team mailing list archive

Re: Build farm and the slave build id menagerie

 

On Tue, 2010-03-16 at 19:38 +0700, Jeroen Vermeulen wrote:
> Jonathan Lange wrote:
> 
> > The plan sounds good to me. It seems that you are missing key
> > information on what the actual threats and security requirements are.
> 
> Absolutely.  In fact, if anyone manages to exploit this for an attack, I 
> propose we hire them.

hmmm.
1st point. There is nothing that can make a system 100% secure. Anyone
who tries to tell you otherwise is peddling snake oil.

2nd point. Start with the assumption that you WILL get exploited. You're
dealing with risk management - the chance may be low, but it does exist.
The exploit may not be a technical one - bribery and corruption are
alive and well, sadly.

3rd point. Someone who has already broken trust by exploiting a
vulnerability is NOT giving you the kind of character reference that
would make you want to hire them. Disclosing a vuln, sure - that
demonstrates a measure of trustworthiness. Exploiting? Not so much.

Would you hire the locksmith who broke into your home to put in better
grills/bars[1] and locks? I recognise that arguing via analogy is
terribly rude of me. :-)


> > I don't want to block what seems to be a useful simplifying change,
> > but were I you I'd consult James Troup, LaMont Jones or do some threat
> > analysis.
> 
> That's an interesting suggestion.  I am reluctant however to approach 
> these people with such an open-ended question.  From where I'm standing, 
> it'd be better for someone with a broader understanding of Soyuz to do 
> that and not waste their time so much.  For a thrill, try asking any 
> expert if this, like, computer code we have is, like, secure or not that 
> you don't fully understand but it does stuff, like, with servers and 
> such and could they give details!

heh, actually being more open-ended and having only a limited idea of
the end system is, honestly, a big plus when doing a threat analysis:
fewer trees, more forest.

With an RA (risk analysis), you assume that any code being run is busted
in some way, capture that assumption in the RA, and hence solutions for
architecting the system around those weaknesses (hopefully) leap out at
you. E.g. making DOS PCs secure enough to process TOP SECRET
information: the O/S has no security features, so you architect the
complete 'system' around that vulnerability.

And quite depressingly, I've done way too many risk/threat analyses with
about as much starting info as you've provided above, jtv. ;-)


> But one thing we do know is, security doesn't happen by accident.  The 
> existing mechanism floats somewhere between belt-plus-suspenders and a 
> chastity belt, and even if it's accidentally secure in some ways today, 
> the code will evolve to erode that.
> 
> So I proposed a replacement that is not perfect but (0) simpler, (1) 
> more secure than what we have,

At the risk of being obnoxious: how can you know that it's more secure?
i.e. in line with the question: is spending $100K to secure a system
holding information worth $10K more secure, or less, than spending only
$20K to secure it?
Ans. Both framings are invalid - it's a trick question. :-)
i.e. it's a dead easy trap to fall into, spending effort to secure parts
of a system in ways that won't actually make you any more secure.


RAs aren't magic bullets by any stretch of the imagination, but they do
help with quantifying and thus prioritising "problems".
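
To make "quantifying" concrete, this is the back-of-the-envelope shape
of it - a wee Python sketch with invented numbers (the classic
annualised-loss-expectancy comparison), nothing to do with any actual
Launchpad figures:

    # Invented numbers - only the shape of the comparison matters.
    def annual_loss_expectancy(asset_value, exposure_factor, incidents_per_year):
        """Expected yearly loss if you do nothing: single loss x yearly rate."""
        single_loss = asset_value * exposure_factor
        return single_loss * incidents_per_year

    # Info worth $10K, a breach loses all of it, one breach every two years.
    ale = annual_loss_expectancy(10000, 1.0, 0.5)

    for control_cost in (100000, 20000, 2000):
        print("control at $%d vs expected loss of $%d: worth it? %s"
              % (control_cost, ale, control_cost < ale))

The control that's "worth it" isn't the most expensive one; it's the one
whose cost sits below the loss it prevents, which is exactly the sort of
thing an RA forces you to write down.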


>  (2) deliberately there to produce a 
> hard-to-guess string, and (3a) easy to gut if we decide it doesn't 
> improve our security after all or (3b) easy to improve in isolation if 
> we decide that it does.

This is where the RA comes in handy: you have a list of priorities and
can thus put a value on outcomes and work effort. e.g. work on #3 if its
complete cycle of code->QA->CP can be done in X person hours; otherwise
we're better off spending the resources on #4 & #5 together. That kind
of mindset.
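
(As an aside on your point 2: the "hard-to-guess string" part is the
cheap bit. I've no idea what the Soyuz/buildd code actually does - the
names below are invented - but something along these lines, drawing from
a proper CSPRNG rather than anything derived from guessable state, is
all that piece needs:

    import binascii, os

    def make_build_cookie(nbytes=32):
        # os.urandom pulls from the kernel CSPRNG; hex-encode for transport.
        return binascii.hexlify(os.urandom(nbytes)).decode('ascii')

    print(make_build_cookie())  # 64 hex chars, infeasible to guess

The interesting questions are the ones around it - who sees the cookie,
how long it lives, what it lets you do - which is RA territory again.)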


Cheers!
- Steve
[1] Not of the barbecue kind...



