
Re: [Xen-devel] Security discussion: Summary of proposals and criteria



On Fri, Jul 06, 2012 at 09:46:48AM -0700, George Dunlap wrote:
> We've had a number of viewpoints expressed, and now we need to figure
> out how to move forward in the discussion.

Hi George,

Thanks for summarizing the discussion thus far. I think that everyone
agrees that the end goal is to improve security for everyone. The
varied opinions show the careful thought that many people have put
into how best to accomplish that goal.

Our diverse experiences and backgrounds lead each of us to approaches
that I believe are fundamentally similar but have nuanced
differences. It is a sign of a strong community that important issues
can be hashed out in a way that accommodates as many viewpoints as
possible, leading to a decision that unifies more than it alienates.
I'm hopeful that this will happen here.

> One thing we all seem to agree on is that with regard to the public
> disclosure and the wishes of the discloser:
> * In general, we should default to following the wishes of the discloser
> * We should have a framework available to advise the discloser of a
> reasonable embargo period if they don't have strong opinions of their
> own (many have listed the oCERT guidelines)
> * Disclosing early against the wishes of the discloser is possible if
> the discloser's request is unreasonable, but should only be considered
> in extreme situations.

I agree with the first two bullets. On the third, I think that it is a
very unusual circumstance for a discoverer to make a request that is
categorized as unreasonable by the security process. For example, I
think that it is reasonable for a discoverer to request that a
different group take over the coordination of an issue if it is
discovered that the vulnerability reaches beyond Xen projects. Should
this occur in the future, I think that Xen should honor the request
and work with the new coordinator.

I think that most Computer Security Incident Response Teams (CSIRTs)
that are likely to act as coordinator have default disclosure
timelines that are compatible with Xen's goal of providing timely
security updates.

On the other hand, if a discoverer requests a disclosure date six
months in the future because they want to announce at a security
conference, it might be considered "unreasonable." I hope that this is
rare; I'd imagine that most security researchers prefer for issues to
be fixed sooner rather than later, just as Xen does.

> What next needs to be decided, it seems to me, is concerning
> pre-disclosure: Are we going to have a pre-disclosure list (to whom we
> send details before the public disclosure), and if so who is going to
> be on it?  Then we can start filling in the details.
> 
> What I propose is this.  I'll try to summarize the different options
> and angles discussed.  I will also try to synthesize the different
> arguments people have made and make my own recommendation.  Assuming
> that no creative new solutions are introduced in response, I think we
> should take an anonymous "straw poll", just to see what people think
> about the various options.  If that shows a strong consensus, then we
> should have a formal vote.  If it does not show consensus, then we'll
> at least be able to discuss the issue more constructively (by avoiding
> solutions no one is championing).

I think that we should attempt to come to some consensus before taking
a straw poll. There are too many options below, and I think that we
should be able to eliminate some of them through open discussion.
Also, I wonder why the straw poll should be anonymous. It seems that
we should be able to come to a quick lazy consensus on the actual
process text changes through email replies of -1 / 0 / +1. I think
that once we have new proposed text there should be a formal vote to
ratify it.

> So below is my summary of the options and the criteria that have been
> brought up so far.  It's fairly long, so I will give my own analysis
> and recommendation in a different mail, perhaps in a day or two.  I
> will also be working with Lars to form a straw poll where members of
> the list can informally express their preference, so we can see where
> we are in terms of agreement, sometime over the next day or two.
> 
> = Proposed options =
> 
> At a high level, I think we basically have five options to consider.
> 
> In all cases, I think that we can make a public announcement that
> there *is* a security vulnerability, and the date we expect to
> publicly disclose the fix, so that anyone who has not been disclosed
> to non-publicly can be prepared to apply it as soon as possible.

I've not seen this path taken by other open source projects or by
commercial software vendors. I think that well-written security
advisories that are distributed broadly (e.g., sent to bugtraq,
full-disclosure, oss-security, and regional CSIRTs if warranted) are
effective alert mechanisms for users. As an aside, JPCERT/CC published
some nice guidelines for software providers on how best to format
advisories:
  http://www.jpcert.or.jp/english/vh/2009/vuln_announce_manual_en2009.pdf

> 1. No pre-disclosure list.  People are brought in only to help produce
> a fix.  The fix is released to everyone publicly when it's ready (or,
> if the discloser has asked for a longer embargo period, when that
> embargo period is up).
> 
> 2. Pre-disclosure list consists only of software vendors -- people who
> compile and ship binaries to others.  No updates may be given to any
> user until the embargo period is up.
> 
> 3. Pre-disclosure list consists of software vendors and some subset of
> privileged users (e.g., service providers above a certain size).
> Privileged users will be provided with patches at the same time as
> software vendors.  However, they will not be permitted to update their
> systems until the embargo period is up.
> 
> 4. Pre-disclosure list consists of software vendors and privileged
> users. Privileged users will be provided with patches at the same time
> as software vendors.  They will be permitted to update their systems
> at any time.  Software vendors will be permitted to send code updates
> to service providers who are on the pre-disclosure list.  (This is the
> status quo.)
> 
> 5. Pre-disclosure list is open to any organization (perhaps with some
> minimal entrance criteria, like having some form of incorporation, or
> having registered a domain name).  Members of the list may update
> their systems at any time; software vendors will be permitted to send
> code updates to anyone on the pre-disclosure list.
> 
> 6. Pre-disclosure list open to any organization, but no one permitted
> to roll out fixes until the embargo period is up.

I think that option 1 effectively abandons many of the benefits of
coordinated/responsible security disclosure in an open source
context. A patch from Xen.org by itself does not provide the
immediately consumable remediation that most users need.

Of the remaining options, to me it seems that a refinement of the
status quo is in order. I don't think that the current policy is
fundamentally flawed. If we approach the problem the same way as code,
wouldn't an iterative approach make sense here rather than a rewrite?

I'll say again that I think that the "software provider" versus
"service provider" distinction is artificial. Some software providers
will undoubtedly be using their software in both private and public
installations. Some service providers will be providing software as a
service.

> = Criteria =
> 
> I think there are several criteria we need to consider.
> 
> * _Risk of being exploited_.  The ultimate goal any pre-disclosure
> process is to try to minimize the total risk for users of being
> exploited.  That said, any policy decision must take into account both
> the benefits in terms of risk reduction as well as the other costs of
> implementing the policy.

If one solution improves security for users more than another but
carries an unreasonably higher implementation cost, that difference
deserves to be called out.

> To simplify things a bit, I think there are two kinds of risk.
> Between the time a vulnerability has been publicly announced and the
> time a user patches their system, that user is "publicly vulnerable"
> -- running software that contains a public vulnerability.  However,
> the user was vulnerable before that; they were vulnerable from the
> time they deployed the system with the vulnerability.  I will call
> this "privately vulnerable" -- running software that contains a
> non-public vulnerability.
> 
> Now at first glance, it would seem obvious that being publicly
> vulnerable carries a much higher risk than being privately vulnerable.
> After all, to exploit a vulnerability you need to have malicious
> intent, the skills to leverage a vulnerability into an exploit, and
> you need to know about a vulnerability.  By announcing it publicly, a
> much greater number of people with malicious intent and the requisite
> skills will now know about the vulnerability; surely this increases
> the chances of someone being actually exploited.

Indeed, this is something that Bruce Schneier explored nearly twelve
years ago in his Crypto-Gram Newsletter on full disclosure:
  http://www.schneier.com/crypto-gram-0009.html#1

In all the time since Bruce wrote this article, I don't think that the
arguments have substantially changed. We end up rehashing the same
points, sometimes using different terminology.
"""
    The problem is that for the most part, the size and shape of the
    window of exposure is not under the control of any central
    authority. Not publishing a vulnerability is no guarantee that
    someone else won't publish it. Publishing a vulnerability is no
    guarantee that someone else won't write an exploit tool, and no
    guarantee that the vendor will fix it. Releasing a patch is no
    guarantee that a network administrator will actually install
    it. Trying to impose rules on such a chaotic system just doesn't
    work.
"""

However, one development during the past 12 years of arguing is the
idea of responsible/coordinated disclosure as a middle ground for
addressing vulnerabilities in a more controlled way. By and large I
think that coordinated disclosure is the best approach available, and
we should look to incorporate best practices established by other
organizations that have traveled this road before us.

> However, one should not under-estimate the risk of private
> vulnerability.  Black hats prize and actively look for vulnerabilities
> which have not yet been made public.  There is, in fact, a black
> market for such "0-day" exploits.  If your infrastructure is at all
> valuable, black hats have already been looking for the bug which makes
> you vulnerable; you have no way of knowing if they have found it yet
> or not.
> 
> In fact, one could make the argument that publicly announcing a
> vulnerability along with a fix makes the vulnerability _less_ valuable
> to black-hats.  Developing an exploit from a vulnerability requires a
> significant amount of effort; and you know that security-conscious
> service providers will be working as fast as possible to close the
> hole.  Why would you spend your time and energy for an exploit that's
> only going to be useful for a day or two at most?

I think that the only responsible approach is to assume that a
malicious actor will undoubtedly expend effort to take advantage of
any window of opportunity available to them, even if that window is
only minutes long.

> Ultimately the only way to say for sure would be to talk to people who
> know the black hat community well.  But we can conclude this: private
> vulnerability is a definite risk which needs to be considered when
> minimizing total risk.
> 
> Another thing to consider is how the nature of the pre-disclosure and
> public disclosure affect the risk.  For pre-disclosure, the more
> individuals have access to pre-disclosure information, the higher the
> risk that the information will end up in the hands of a black-hat.
> Having a list anyone can sign up to, for instance, may be very little
> more secure than a quiet public disclosure.

Right. This goes back to two points Bruce made back in 2000 on
attempts to reduce the window of exposure for a vulnerability: 1)
limit knowledge of the vulnerability and 2) limit the duration of the
window. He was speaking more to the secrecy approach versus full
disclosure, but I think that the points still apply here. His
conclusion is also as correct today as it was in 2000: "the debate has
no solution because there is no one solution."
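
As a toy illustration of the "limit knowledge" point (this is my own
back-of-the-envelope model, not something from Bruce's article): if
each of n pre-disclosure list members were to leak independently with
some small probability p during the embargo, the chance of at least
one leak would be 1 - (1 - p)^n, which grows quickly with n:

    # Rough sketch, assuming independent leaks -- a crude assumption,
    # but enough to show how list size drives the aggregate risk.
    def leak_probability(n, p):
        return 1 - (1 - p) ** n

    print(leak_probability(10, 0.01))   # ~0.10
    print(leak_probability(100, 0.01))  # ~0.63

Crude as the model is, it makes concrete why a list that anyone can
sign up to starts to resemble a quiet public disclosure.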

> For public disclosure, the nature of the disclosure may affect the
> risk, or the perception of risk, materially.  If the fix is simply
> checked into a public repository without fanfare or comment, it may
> not raise the risk of public vulnerability significantly; while if the
> fix is announced in press releases and on blogs, the _perception_ of
> the risk will undoubtedly increase.

I do not think that silent check-ins provide adequate notice to users
that a security vulnerability has been addressed. You're right that
the perception of risk may increase, but to me that seems a
reasonable price for providing clear guidance to consumers of Xen
projects.

> * _Fairness_.  Xen is a community project and relies on the good-will
> of the community to continue.  Giving one sub-group of our users an
> advantage over another sub-group will be costly in terms of community
> good will.  Furthermore, depending on what kind of sub-group we have
> and how it's run, it may well be considered anti-competitive and
> illegal in some jurisdictions.  Some might say we should never
> consider such a thing.  At very least, doing so should be very
> carefully considered to make sure the risk is worth the benefit.
>
> The majority of this document will focus on the impact of the policy
> on actual users.  However, I think it is also legitimate to consider
> the impact of the policies on software vendors as well.  Regardless of
> the actual risk to users, the _perception_ of risk may have a
> significant impact on the success of some vendors over others.
> 
> It is in fact very difficult to achieve perfect fairness between all
> kinds of parties.  However, as much as possible, unfairness should be
> based on decisions that the party themselves have a reasonable choice
> about.  For instance, having a slight advantage to compiling your own
> hypervisor directly from xen.org rather than using a software vendor
> might be tolerable because 1) those receiving from software vendors
> may have other advantages not available to those consuming directly,
> and 2) anyone can switch to pulling directly from xen.org if they
> wish.

I think that any concerns about fairness should be raised now by the
parties that feel they are impacted, as part of the open discussion,
rather than as speculation.
 
> * _Administrative overhead_.  This comprises a number of different
> aspects: for example, how hard is it to come up with a precise and
> "fair" policy?  How much effort will it be for xen.org to determine
> whether or not someone should be on the list?

The transparent list-request process used by the distros mailing list
seems like a low-impact model to follow.

> Another question has to do with robustness of enforcement.  If there
> is a strong incentive for people on the list to break the rules
> ("moral hazard"), then we need to import a whole legal framework: how
> do we detect breaking the rules?  Who decides that the rules have
> indeed been broken, and decides the consequences?  Is there an appeals
> process?  At what point is someone who has broken the rules in the
> past allowed back on the list?  What are the legal, project, and
> community implications of having to do this, and so on?  All of this
> will impose a much heavier burden on not only this discussion, but
> also on the xen.org security team.

I think that before we become too concerned about rules and
enforcement, we should have a better sense of what rules are in the
best interest of the largest population of users possible. I don't
feel that we've discussed this enough.

You're right that there's likely no solution that everyone is going to
feel is "fair." But we should be able to come up with a proposal that
everyone agrees improves security for the most users.

This is a trade-off that is made constantly in coordinated response
activities. If there's a commonly embedded bit of code, for example
zlib, that many software vendors ship in their products, it will be
impossible to do a coordinated disclosure among every single one of
them. Typically a response is coordinated among as many parties as
can be reasonably handled while preserving some confidence that leaks
will be minimized.

For a recent example of this, see the LWN article on the DES-based
crypt() coordination posted last month: http://lwn.net/Articles/500444/

> (Disclaimer: I am not a lawyer.) It should be noted that because of
> the nature of the GPL, we cannot impose additional contractual
> limitations on the re-distribution of a GPL'ed patch, and thus we
> cannot seek legal redress for those who re-distribute such a patch (or
> resulting binaries) in a way that violates the pre-disclosure policy.
> But for the purposes of this discussion, I am going to assume that we
> can, however, choose to remove them from the pre-disclosure list as a
> result.

I'm also not a lawyer. I don't see any reason why removing someone
from a mailing list would be prohibited by the GPL. But again, before
we decide that such rules should be in place, we need to consider
whether they are in the best interest of improving security.

> I think those cover the main points that have been brought up in the
> discussion.  Please feel free to give feedback.  Next week probably I
> will attempt to give an analysis, applying these criteria to the
> different options.  I haven't come up with what I think is a
> satisfactory conclusion yet.

Thanks again for writing this up. I'm looking forward to your analysis.

By the way, I'll be at OSCON next week. I'd love to meet up and talk
in person.

Matt




 

