Re: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for multiple servers


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Mon, 17 Mar 2014 14:52:02 +0000
  • Accept-language: en-GB, en-US
  • Cc: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Mon, 17 Mar 2014 14:52:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHPOIHtl/Hs0EZRYkOvPLk7jZi8g5rgde+AgATKvVD///hXAIAAEWgA///zsYCAAB/kwP///vCAAAI7FwA=
  • Thread-topic: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for multiple servers

> -----Original Message-----
> From: Ian Campbell
> Sent: 17 March 2014 14:44
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for
> multiple servers
> 
> On Mon, 2014-03-17 at 13:56 +0000, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Ian Campbell
> > > Sent: 17 March 2014 12:54
> > > To: Paul Durrant
> > > Cc: xen-devel@xxxxxxxxxxxxx
> > > Subject: Re: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for
> > > multiple servers
> > >
> > > On Mon, 2014-03-17 at 12:51 +0000, Paul Durrant wrote:
> > >
> > > > > > > > diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> > > > > > > > index 1f6ce50..3116653 100644
> > > > > > > > --- a/tools/libxc/xc_domain_restore.c
> > > > > > > > +++ b/tools/libxc/xc_domain_restore.c
> > > > > > > > @@ -746,6 +746,7 @@ typedef struct {
> > > > > > > >      uint64_t acpi_ioport_location;
> > > > > > > >      uint64_t viridian;
> > > > > > > >      uint64_t vm_generationid_addr;
> > > > > > > > +    uint64_t nr_ioreq_servers;
> > > > > > >
> > > > > > > This makes me wonder: what happens if the source and target
> > > > > > > hosts do different amounts of disaggregation? Perhaps in Xen
> > > > > > > N+1 we split some additional component out into its own
> > > > > > > process?
> > > > > > >
> > > > > > > This is going to be complex with the allocation of space for
> > > > > > > special pages, isn't it?
> > > > > > >
> > > > > >
> > > > > > As long as we have enough special pages then is it complex?
> > > > >
> > > > > The "have enough" is where the complexity comes in though. If Xen
> > > > > version X needed N special pages and Xen X+1 needs N+2 pages then
> we
> > > > > have a tricky situation because people may well configure the guest
> with
> > > > > N.
> > > > >
> > > >
> > > > I don't quite follow. The specials are just part of the guest image
> > > > and so they get migrated around with that guest, so providing we
> > > > know how many special pages a guest had when it was created (so we
> > > > know how many there are to play with for secondary emulation)
> > > > there's no problem, is there?
> > >
> > > What if the newer version of Xen requires more secondaries than the
> > > older one? That's the case I'm thinking of.
> > >
> >
> > I see. I guess the only other option is to put the pfns somewhere that
> > we can always grow (within reason). The guest itself never maps these
> > pfns, only the emulator, but they should be part of the guest's
> > allocation. Is there somewhere else in the p2m that they could live
> > such that we can grow the space even for migrated-in guests? Somewhere
> > just above the top of RAM perhaps?
> 
> It's always struck me as odd to have a Xen<->DM communication channel
> sitting there in guest pfn space (regardless of who the nominal owner
> is).
> 
> I don't suppose there is any way to pull these pages out of the guest
> pfn space while still accounting them to the guest. Or, if there is, it
> would probably be a whole other kettle of fish than this series.
> 

The closest analogy I can think of accounting-wise would be shadow pages. I'll 
have a look at how they are handled.
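
For reference, shadow memory is already the case of memory that is charged
to the domain without living in its pfn space, and the toolstack can resize
it at any time. A minimal sketch of that existing interface, as it stood
around Xen 4.4 (xc_shadow_control() and XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION
are the real libxc/domctl names; the wrapper itself is just illustrative):

    #include <xenctrl.h>

    /* Shadow allocation is accounted to the domain but is not part of
     * its p2m, so it can be grown or shrunk after the fact, e.g. on the
     * restore side of a migration. */
    static int set_shadow_mb(xc_interface *xch, uint32_t domid,
                             unsigned long mb)
    {
        return xc_shadow_control(xch, domid,
                                 XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
                                 NULL /* dirty bitmap */, 0 /* pages */,
                                 &mb /* in/out: size in MB */,
                                 0 /* mode */, NULL /* stats */);
    }

If ioreq pages were accounted the same way, their number would no longer
have to be baked into the guest's memory layout at build time.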
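
Alternatively, for the "just above the top of RAM" placement suggested
above, a toolstack-side sketch might look like the following (NR_IOREQ_PFNS
and the max_ram_pfn placement are hypothetical;
xc_domain_populate_physmap_exact() is the real libxc call):

    #include <xenctrl.h>

    #define NR_IOREQ_PFNS 8 /* hypothetical window size */

    /* Sketch: reserve a window of emulator-only pfns immediately above
     * the guest's highest RAM pfn, so a newer Xen could grow it (within
     * reason) without disturbing the rest of the layout. */
    static int reserve_ioreq_pfns(xc_interface *xch, uint32_t domid,
                                  xen_pfn_t max_ram_pfn)
    {
        xen_pfn_t pfns[NR_IOREQ_PFNS];
        unsigned int i;

        for ( i = 0; i < NR_IOREQ_PFNS; i++ )
            pfns[i] = max_ram_pfn + 1 + i;

        return xc_domain_populate_physmap_exact(xch, domid, NR_IOREQ_PFNS,
                                                0 /* order */, 0 /* flags */,
                                                pfns);
    }

Since the guest itself never maps these pfns, only the emulator does, the
space could then be grown even for migrated-in guests, which was the
sticking point above.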

  Paul

> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel