
Re: [Xen-devel] [PATCH v2 1/4] x86/MSI-X: be more careful during teardown



On Mon, 13 Apr 2015, Jan Beulich wrote:
> >>> On 13.04.15 at 14:01, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> > On Mon, 13 Apr 2015, Jan Beulich wrote:
> >> >>> On 13.04.15 at 12:50, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> >> > On Mon, 13 Apr 2015, Jan Beulich wrote:
> >> >> While we trust Dom0 to not do outright bad things,
> >> >> the hypervisor should still avoid doing things that can go wrong
> >> >> due to the state a device is put (or left) in by Dom0.
> >> > 
> >> > Xen should also avoid doing things that can go wrong because of the
> >> > state a device is put in by QEMU or other components in the system.
> >> > There isn't much room for Xen to play with.
> >> 
> >> Qemu is either part of Dom0, or doesn't play with devices directly.
> > 
> > I don't understand the point you are trying to make here.
> > If your intention is to point out that QEMU shouldn't be writing to the
> > control register, as I wrote earlier, the current codebase disagrees
> > with you and has been that way for years.
> 
> Yes, this was precisely my intention. Qemu having done a certain
> thing for many years doesn't mean it is correct, or that it
> shouldn't be fixed if broken.

Sure, as long as the fix doesn't cause regressions.

Do you have a specific example of how QEMU's behaviour is "broken" and
how this patch is supposed to correct it?  I thought you were trying to
work around a Dom0 kernel issue with this patch, not a QEMU bug.

Working around a Dom0 kernel bug by creating a QEMU bug is not a great
move.
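
For concreteness, the write being argued about is of the kind sketched
below: a device model updating the MSI-X Enable and Function Mask bits
in the Message Control word of the device's MSI-X capability.  The bit
layout follows the PCI spec, but everything else (the fake config
space, the helper names, the capability offset) is made up for
illustration; this is not QEMU's actual code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Bit layout per the PCI spec; the rest of this sketch is illustrative. */
#define MSIX_MSG_CTRL_OFFSET  0x2      /* Message Control, from cap base */
#define MSIX_CTRL_ENABLE      0x8000   /* MSI-X Enable (bit 15) */
#define MSIX_CTRL_FUNC_MASK   0x4000   /* Function Mask (bit 14) */

/* Fake 256-byte config space standing in for the real device. */
static uint8_t cfg_space[256];

static uint16_t cfg_read16(unsigned int off)
{
    return cfg_space[off] | (cfg_space[off + 1] << 8);
}

static void cfg_write16(unsigned int off, uint16_t val)
{
    cfg_space[off] = val & 0xff;
    cfg_space[off + 1] = val >> 8;
}

/* The kind of message-control update under discussion. */
static void msix_set_enable(unsigned int cap, bool enable, bool mask_all)
{
    uint16_t ctrl = cfg_read16(cap + MSIX_MSG_CTRL_OFFSET);

    ctrl &= ~(MSIX_CTRL_ENABLE | MSIX_CTRL_FUNC_MASK);
    if (enable)
        ctrl |= MSIX_CTRL_ENABLE;
    if (mask_all)
        ctrl |= MSIX_CTRL_FUNC_MASK;

    cfg_write16(cap + MSIX_MSG_CTRL_OFFSET, ctrl);
}

int main(void)
{
    unsigned int cap = 0x70;           /* made-up MSI-X capability offset */

    msix_set_enable(cap, true, false); /* enable MSI-X, clear Function Mask */
    printf("message control: %#x\n",
           (unsigned)cfg_read16(cap + MSIX_MSG_CTRL_OFFSET));
    return 0;
}

QEMU's Xen PCI passthrough code does something along these lines when
the guest pokes the emulated capability, which is why any change on the
Xen side that starts rejecting such writes needs matching (and
backportable) QEMU patches.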


> >> > And how are we going to deal with older "unfixed" QEMUs?
> >> > So far we have been using the same policy for QEMU and the Dom0 kernel:
> >> > Xen doesn't break them -- old Linux kernels and QEMUs are supposed to
> >> > just work.
> >> 
> >> I'm not sure that's really true for qemu, or if it is, then only by pure
> >> luck:
> > 
> > This is false, it is so by design.  QEMU has an internal libxc
> > compatibility layer.
> 
> Which could necessarily only ever be updated after the fact (of
> a libxc change) and only ever in maintained branches. I.e. as
> widely or as narrowly as fixing the issues here would require.

True; in fact we used it with older Xen releases, back when upstream
QEMU wasn't the default device model yet. Nothing from Xen 4.2 onward
has required a change in QEMU or its compatibility shim, and I would
like to keep it that way unless we have a very compelling reason (note
that Paul's ioreq server changes are not required to build older QEMU
versions against new Xen versions).


> >> I view it as unavoidable to break older qemu here.
> > 
> > I disagree and I am opposed to any patch series that would
> > deliberately break QEMU.
> 
> If that was without also adjusting qemu I would understand you.

You say that, but the initial submission only included the Xen
hypervisor changes.

I am not completely against stopping QEMU from writing to the control
register, but we cannot go ahead and commit patches to Xen that break
QEMU compatibility willy-nilly.

At least we need to:

- review the QEMU and Xen patches together
- commit the Xen patches only after the QEMU patches have been reviewed
  and acked
- consider making the QEMU changes acceptable for the QEMU stable trees
  and mark them for backport
- document the minimal QEMU version that can work for PCI passthrough


> Considering that the issues here haven't been brought up just
> yesterday, the lack of any substantial counter proposal on how
> to fix this is signaling to me that there are few alternatives. Or
> do you have any going beyond "do it another way"?

TBH, in the case of this issue, I think that having the Dom0 kernel do
the right thing is good enough.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

