
Re: [Xen-devel] [stage1-xen PATCH v1 04/10] build/fedora: Add `run` and `components/*` scripts



On Sat, 9 Sep 2017, Rajiv Ranganath wrote:
> On Thu, Sep 07 2017 at 12:29:54 AM, Stefano Stabellini 
> <sstabellini@xxxxxxxxxx> wrote:
> 
> [...]
> 
> >> +QEMU_BRANCH = 'master'
> >
> > I am not sure we want to always check out the latest QEMU. It is a
> > moving target. Would it make sense to use one of the latest releases
> > instead, such as v2.10.0?
> >
> >
> 
> [...]
> 
> I feel that once we have an understanding of what the stable Xen
> container experience for our users should be, it makes a lot of sense
> to support two stable versions (on a rolling basis) along with the
> unstable/devel versions of Xen, QEMU and rkt.

Yes, I think that would be ideal too.


> I am hoping we can include the following before adding support for a
> stable version.
> 
> 1. Kernel - PV Calls backend support will be in 4.14, which is a few
> months away.
> 
> 2. PVHv2 - xl and PVHv2 support is in flight for 4.10. I would like to
> see Xen container users start off with PVHv2 and PV Calls networking.
> Therefore I am a bit hesitant about adding support for Xen 4.9.

Yes, that would be fantastic. Fortunately, from the stage1-xen code
point of view, there is very little difference between PVHv2 and PV.
Switching from one to the other should be a matter of adding one line
to the xl config file.
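For illustration, a minimal sketch of what that could look like. This
assumes the `type` option being introduced with the Xen 4.10 xl.cfg
syntax; the guest name, kernel path, and sizing below are placeholder
values, not anything from stage1-xen:

```
# Minimal xl guest config (sketch). The single "type" line below is
# what selects PVHv2; dropping it (or using type = "pv") would give a
# classic PV guest instead.
name   = "stage1-xen-guest"
type   = "pvh"
kernel = "/path/to/vmlinuz"
memory = 512
vcpus  = 2
```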

Regarding statements of support, see below.


> 3. Multiboot2 - One of the reasons why I documented using EFI is because
> I could not get multiboot2 to work. It looks like the fix for it is on
> its way. I anticipate using multiboot2 would be easier for users.

That's for the host, right? I didn't have that problem, but maybe
that's because I am not using Fedora.


> 4. Rkt - Support for the Kubernetes CRI and the OCI image format will
> be important to our users. Rkt is working on them, but I'm not sure of
> its progress. There are other projects also incubating in CNCF -
> cri-o and cri-containerd.
> 
> PV Calls networking is new to me, and I wanted to do some prototyping
> to understand how it would integrate with the rest of the container
> ecosystem after landing this series.
> 
> By adding support for xen-4.9, qemu-2.10 or rkt-1.28.1, I feel we
> should not yet set any kind of stability or backward compatibility
> expectations around stage1-xen.

I agree we should not set any kind of backward compatibility
expectations yet. See below.


> My preference would be to keep things on master (albeit deliberately)
> till we can figure out a good xen container experience for our users.
> 
> Please let me know what you think.

You have a good point. I think we should be clear in the README about
the stability of the project and its backward compatibility. We should
openly say that it is still a "preview" and that there is no "support"
or "compatibility" yet.

Choosing Xen 4.9 should not be seen as a statement of support. I think
we should choose the Xen version based only on the technical merits.

In the long term it would be great to support multiple stable versions
and a development version of Xen. For now, I think it makes sense to
take an ad-hoc approach: I would use Xen 4.9 just because it is the
best choice at the moment, then update to other versions manually when
it makes sense. I don't think that building against a moving target
("master") is a good idea, because we might end up stumbling over
confusing and time-consuming bugs that have nothing to do with
stage1-xen. However, we could pin an arbitrary commit on the Xen tree
if that's convenient for us, because at this stage there is no real
support anyway. For example, PVCalls will require some tools changes
in Xen. Once they are upstream, we'll want to update to the latest Xen
version with PVCalls support.

Does that make sense?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

