
Re: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for multiple servers


  • To: George Dunlap <George.Dunlap@xxxxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Tue, 18 Mar 2014 13:45:59 +0000
  • Accept-language: en-GB, en-US
  • Cc: Ian Campbell <Ian.Campbell@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Tue, 18 Mar 2014 13:46:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHPOIHtl/Hs0EZRYkOvPLk7jZi8g5rgde+AgATKvVD///hXAIAAEWgA///zsYCAAB/kwP///vCAAAI7FwD///FegP/+mgYggALeu4D//+vnoP//1UZg
  • Thread-topic: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for multiple servers

> -----Original Message-----
> From: Paul Durrant
> Sent: 18 March 2014 13:39
> To: 'George Dunlap'
> Cc: xen-devel@xxxxxxxxxxxxx; Ian Campbell
> Subject: RE: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for
> multiple servers
> 
> > -----Original Message-----
> > From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> > George Dunlap
> > Sent: 18 March 2014 13:24
> > To: Paul Durrant
> > Cc: xen-devel@xxxxxxxxxxxxx; Ian Campbell
> > Subject: Re: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for
> > multiple servers
> >
> > On Tue, Mar 18, 2014 at 11:33 AM, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > wrote:
> > >> -----Original Message-----
> > >> From: Ian Campbell
> > >> Sent: 17 March 2014 14:56
> > >> To: Paul Durrant
> > >> Cc: xen-devel@xxxxxxxxxxxxx
> > >> Subject: Re: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for
> > >> multiple servers
> > >>
> > >> On Mon, 2014-03-17 at 14:52 +0000, Paul Durrant wrote:
> > >> > > -----Original Message-----
> > >> > > From: Ian Campbell
> > >>
> > >> > > It's always struck me as odd to have a Xen<->DM communication
> > >> > > channel sitting there in guest pfn space (regardless of who the
> > >> > > nominal owner is).
> > >> > >
> > >> > > I don't suppose there is any way to pull these pages out of the guest
> > >> > > pfn space while still accounting them to the guest. Or if there
> > >> > > is, it would probably be a whole other kettle of fish from this
> > >> > > series.
> > >> > >
> > >> >
> > >> > The closest analogy I can think of accounting-wise would be shadow
> > >> > pages. I'll have a look at how they are handled.
> > >>
> > >> I think the big difference is that no one outside Xen needs to be able to
> > >> refer to a shadow page, whereas the device models need some sort of
> > >> handle onto the ring to be able to map them etc. Not insurmountable I
> > >> suppose.
> > >>
> > >
> > > Probably not, but it's looking like it will be a bit of a can of
> > > worms. Are you ok with sticking to base+range HVM params for
> > > secondary emulators that can potentially be moved on migration for
> > > now? I.e. the save image just contains a count. There's still some
> > > growth room in the existing area (all pages from FE800000 to
> > > FF000000 AFAICT) so as long as - as George said - we don't bake the
> > > PFN layout in, I don't think we preclude moving the emulator PFNs
> > > around in future.
> >
> > xentrace has to share pages between Xen and dom0; it just exposes an
> > interface for dom0 to get the mfn and then maps those mfns.  Couldn't
> > you do something similar?  When you create an ioreq server, Xen could
> > allocate the pages internally; and then you could use
> > hvm_get_ioreq_server_info to get the MFNs, and xc_map_foreign_range()
> > to map them.  Am I missing something?
> >
> 
> If you use xc_map_foreign_range() then presumably the page is still in
> the p2m. AFAIK the value supplied to xc_map_foreign_range() is still a
> guest frame number, isn't it?

Ah, I didn't know about mapping using DOMID_XEN. That looks ok then :-)

  Paul

> 
>   Paul
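
[Editor's note: below is a minimal sketch of the xentrace-style mapping
George describes and Paul accepts above. The libxc calls used
(xc_interface_open(), xc_map_foreign_range()) and the DOMID_XEN
pseudo-domain ID are real; how the MFN reaches the device model is left
abstract, since the hypercall that would return it was still under
discussion in this thread. A sketch, not the series' implementation.]

/*
 * Minimal sketch (not from the patch series): map a Xen-owned page
 * into a device model the way xentrace consumers do. The MFN is taken
 * from the command line purely for illustration; in the real design it
 * would come from the ioreq-server info hypercall being discussed.
 * Build against libxenctrl.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <xenctrl.h>   /* xc_interface, xc_map_foreign_range(), DOMID_XEN */

int main(int argc, char **argv)
{
    xc_interface *xch;
    unsigned long mfn;
    void *ring;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <mfn>\n", argv[0]);
        return EXIT_FAILURE;
    }

    xch = xc_interface_open(NULL, NULL, 0);
    if (xch == NULL) {
        perror("xc_interface_open");
        return EXIT_FAILURE;
    }

    mfn = strtoul(argv[1], NULL, 0);   /* MFN obtained out of band */

    /*
     * DOMID_XEN is the point Paul picks up on: the frame is accounted
     * to Xen itself and sits in no guest's p2m, yet a privileged
     * domain can still map it by machine frame number.
     */
    ring = xc_map_foreign_range(xch, DOMID_XEN, XC_PAGE_SIZE,
                                PROT_READ | PROT_WRITE, mfn);
    if (ring == NULL) {
        perror("xc_map_foreign_range");
        xc_interface_close(xch);
        return EXIT_FAILURE;
    }

    /* ... use the shared ring ... */

    munmap(ring, XC_PAGE_SIZE);
    xc_interface_close(xch);
    return EXIT_SUCCESS;
}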
