
Re: [Xen-devel] [Notes for xen summit 2018 design session] Process changes: is the 6 monthly release Cadence too short, Security Process, ...

On Mon, Jul 02, 2018 at 06:03:39PM +0000, Lars Kurth wrote:
> ### Security Process
> *Batches and timing:* Everyone present, felt that informal batching is good 
> (exception Doug G), 

fwiw, I don't dislike the batching. I just complained when there are a
lot of items in a batch. We attempt to live patch every issue and have
that ready to go when the embargo drops. When there are multiple XSAs
we each grab one to work on, but depending on the size of the batch and
the team's current workload there might be one with no staffing
available. That puts a bit of strain on us, since whoever finishes
their XSA first needs to grab that last one. Then, quite often, at
least one XSA's patch gets revised during the process, which means
additional work, and suddenly we're swamped. It was more an off-the-cuff
remark about big batches than anything noteworthy as a formal
objection to batching.

> Again, there was a sense that some of the issues we are seeing could be 
> solved if we had better 
> CI capability: in other words, some of the issues we were seeing could be 
> resolved by
> * Better CI capability as suggested in the Release Cadence discussion
> * Improving some of the internal working practices of the security team
> * Before we commit to a change (such as improved batching), we should try 
> them first informally. 
>   E.g. the security team could try and work towards more predictable dates 
> for batches vs. a 
>   concrete process change

My feeling on CI is clear in this thread and others. But I think what
would help with OSSTEST bottlenecks is doing better at separating the
different parts of the testing process into more parallel tasks that
also give the contributor feedback faster. I'll obviously never
suggest the GitHub/GitLab PR/MR model to a mailing-list-driven project
because I wouldn't survive the hate mail, but there is something those
models do provide: a lot of work can be pushed back onto the
contributor automatically instead of falling on the reviewer. The Rust
project is a decent model here. They only accept code contributions
via a GitHub PR, but their process immediately runs the submission
against code style checks, a build test on all their supported
platforms, and a number of unit tests over the entire code base.
Lastly, a bot assigns a random maintainer from that part of the code
base to review the submission. The way Xen works, the first three
steps are up to the reviewer to validate, and the last one is up to
the contributor to do manually (and should they make a mistake, the
reviewer needs to chime in).
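
The reviewer-assignment step is the easiest of those to automate. A
minimal sketch of such a bot, where the component map and names are
invented for illustration (the real data would come from Xen's
MAINTAINERS file):

```python
import random

# Hypothetical component -> maintainers map; placeholder names only,
# not anyone's real review assignments.
MAINTAINERS = {
    "xen/arch/x86": ["alice", "bob"],
    "xen/arch/arm": ["carol"],
    "tools/libxl": ["dave", "erin"],
}

def assign_reviewer(changed_path, rng=random):
    """Pick a random maintainer whose component prefixes the changed path."""
    for component, people in MAINTAINERS.items():
        if changed_path.startswith(component):
            return rng.choice(people)
    return None  # no match: fall back to manual triage

print(assign_reviewer("xen/arch/x86/mm.c"))
```

A real bot would also need to skip maintainers who authored the patch
and spread load across the pool, but the lookup itself is this small.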

The biggest boon to our review process would be to automate away a
bunch of these tasks, because our reviewers are human and things get
missed. Many misses aren't even the fault of the reviewer doing a poor
job, e.g. the code change breaks the build with a newer GCC than the
reviewer has installed.
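
A pre-review build matrix could catch that class of breakage before a
human ever looks at the patch. A rough sketch, assuming a fixed
compiler list and a plain make invocation (neither is Xen's real CI
configuration):

```python
import shutil

# Compilers to try; purely illustrative, not Xen's actual support matrix.
CANDIDATES = ["gcc-12", "gcc-13", "clang"]

def build_commands(candidates=CANDIDATES, which=shutil.which):
    """Emit one build command per compiler actually present on this host."""
    return [f"make -C xen CC={cc}" for cc in candidates if which(cc)]

for cmd in build_commands():
    print(cmd)
```

In a real pipeline each emitted command would become an independent,
parallel CI job whose failure is reported straight to the contributor.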

> Note that we did not get to the stable baseline discussion: but it was 
> highlighted that several 
> members of the security team also wear the hat of distro packagers for Debian 
> and CentOS and 
> are starting to feel pain.

To me the hardship comes from the fact that security patches apply
against the staging branch for that release (e.g. staging-4.10) but
not necessarily to the last release. Steven Haigh has brought this up
as well. This leaves each downstream responsible for backporting the
security patch against the release they shipped, which has caused a
number of distros to bow out of providing security updates for Xen.
Yocto (via meta-virt) and Ubuntu are two notable ones that don't
update for XSAs. Gentoo is typically best effort, depending on how
much time that maintainer has.
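
Why a patch can apply to staging-4.10 but not to the 4.10 release tag
comes down to context matching: the hunk's surrounding lines must
appear verbatim in the target file. A toy illustration, with file
contents and hunk invented for the example:

```python
def hunk_applies(file_lines, context):
    """Return True if the hunk's context appears verbatim in the file."""
    n = len(context)
    return any(file_lines[i:i + n] == context
               for i in range(len(file_lines) - n + 1))

# staging picked up a later fix that the release tag never saw
staging = ["static void fixup(void)", "{", "    sync_state();", "}"]
release = ["static void fixup(void)", "{", "}"]

# the context lines the (made-up) XSA patch expects to find
context = ["{", "    sync_state();"]

print(hunk_applies(staging, context))  # True: applies to staging
print(hunk_applies(release, context))  # False: downstream must backport
```

Real tools do this with fuzz and offsets (git apply, patch), but the
mismatch is the same: every commit on staging that isn't on the
release tag is a chance for the XSA's context to drift away from what
downstreams shipped.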


Xen-devel mailing list


