Re: [Xen-devel] [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 08 October 2018 15:59
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>;
> xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: RE: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use
> HVM_PARAM_[BUF]IOREQ_PFN
>
> >>> On 08.10.18 at 16:38, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: 08 October 2018 14:29
> >> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> >> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>;
> >> xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> >> Subject: Re: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use
> >> HVM_PARAM_[BUF]IOREQ_PFN
> >>
> >> >>> On 05.10.18 at 15:43, <paul.durrant@xxxxxxxxxx> wrote:
> >> > Since commit 2c257bd6 "x86/hvm: remove default ioreq server (again)" the
> >> > GFNs allocated by the toolstack and set in HVM_PARAM_IOREQ_PFN and
> >> > HVM_PARAM_BUFIOREQ_PFN have been unused. This patch allows them to be used
> >> > by (non-default) ioreq servers.
> >> >
> >> > NOTE: This fixes a compatibility issue. A guest created on a version of
> >> >       Xen that pre-dates the initial ioreq server implementation and then
> >> >       migrated in will currently fail to resume because its migration
> >> >       stream will lack values for HVM_PARAM_IOREQ_SERVER_PFN and
> >> >       HVM_PARAM_NR_IOREQ_SERVER_PAGES *unless* the system has an
> >> >       emulator domain that uses direct resource mapping (which depends
> >> >       on the version of privcmd it happens to have) in which case it
> >> >       will not require use of GFNs for the ioreq server shared
> >> >       pages.
> >>
> >> Meaning this wants to be backported till where?
> >>
> >
> > This fix is 4.12 specific because it is predicated on removal of default
> > ioreq server support.
>
> Ah, good.
> >> > A similar compatibility issue with migrated-in VMs exists with Xen 4.11
> >> > because the upstream QEMU fall-back to use a legacy ioreq server was
> >> > broken when direct resource mapping was introduced.
> >> > This is because, prior to the resource mapping patches, it was the
> >> > creation of the non-default ioreq server that failed if GFNs were not
> >> > available whereas, as of 4.11, it is retrieval of the info that fails,
> >> > which does not trigger the fall-back.
> >>
> >> Are you working on a fix or workaround for this, too, then?
> >>
> >
> > Not yet. I'm not sure how to approach this. There are a few options:
> >
> > 1. Backport default IOREQ server removal and this fix.
> > 2. Do a bespoke 4.11 fix that forces IOREQ server creation to fail if there
> >    are no GFNs available, thus triggering the default IOREQ server fallback
> >    in QEMU.
> > 3. Upstream a fix into QEMU to do a fallback at the point that it fails to
> >    get GFNs, i.e. have it close its IOREQ server and then fall back to
> >    default.
> >
> > The problem with 1 is that it breaks qemu trad. 2 is probably simplest, but
> > if the emulator can do resource mapping it is unnecessary. 3 is probably
> > best, but it's not our fix to deliver.
> >
> > Thoughts?
>
> 2 indeed looks best to me then. Though I'm not sure I understand
> what you say correctly: Would triggering the default IOREQ server
> fallback be a step backwards, if the emulator is capable and able to
> use resource mapping?

Yes. With resource mapping the emulator doesn't rely on pages in special
GFNs and so it would indeed be a step backwards...

> If so, somehow avoiding this would of
> course be nice, and I'd then assume this isn't reasonable to achieve
> without a qemu side change, in which case the solution wouldn't be
> any better than 3 anymore.

...but avoiding it would indeed mean a change in QEMU.
Given that the chances of having a VM migrated all the way in from some
ancient Xen, without a reboot along the way at any point, are probably quite
small, and that no-one is likely to notice a fall-back to the default IOREQ
server unless they are really looking for it, I'd say let's go with 2.

  Paul

> Jan