
Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...



> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf
> Of Paul Durrant
> Sent: 03 January 2018 16:48
> To: 'Jan Beulich' <JBeulich@xxxxxxxx>
> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Tim
> (Xen.org) <tim@xxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> Julien Grall <julien.grall@xxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new
> mappable resource type...
> 
> > -----Original Message-----
> > From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf
> > Of Jan Beulich
> > Sent: 03 January 2018 16:41
> > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> > <wei.liu2@xxxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Tim
> > (Xen.org) <tim@xxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> > Julien Grall <julien.grall@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; Ian
> > Jackson <Ian.Jackson@xxxxxxxxxx>
> > Subject: Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new
> > mappable resource type...
> >
> > >>> On 03.01.18 at 17:06, <Paul.Durrant@xxxxxxxxxx> wrote:
> > >> -----Original Message-----
> > >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> > >> Sent: 03 January 2018 15:48
> > >> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > >> Cc: Julien Grall <julien.grall@xxxxxxx>; Andrew Cooper
> > >> <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; George
> > >> Dunlap <George.Dunlap@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> > >> Stefano Stabellini <sstabellini@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> > >> Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>; Tim (Xen.org)
> > >> <tim@xxxxxxx>
> > >> Subject: Re: [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable
> > >> resource type...
> > >>
> > >> >>> On 03.01.18 at 13:19, <paul.durrant@xxxxxxxxxx> wrote:
> > >> > +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > >> > +{
> > >> > +    struct domain *d = s->domain;
> > >> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > >> > +
> > >> > +    if ( !iorp->page )
> > >> > +        return;
> > >> > +
> > >> > +    page_list_add_tail(iorp->page, &d->arch.hvm_domain.ioreq_server.pages);
> > >>
> > >> Afaict s->domain is the guest, not the domain containing the
> > >> emulator. Hence this new model of freeing the pages is safe only
> > >> when the emulator domain is dead by the time the guest is being
> > >> cleaned up.
> > >
> > > From the investigations done w.r.t. the grant table pages I don't think
> > > this is the case. The emulating domain will have references on the pages
> > > and this keeps the target domain in existence, only completing domain
> > > destruction when the references are finally dropped. I've tested this by
> > > leaving an emulator running whilst I 'xl destroy' the domain; the domain
> > > remains as a zombie until the emulator terminates.
> >
> > Oh, right, I forgot about that aspect.
> >
> > >> What is additionally confusing me is the page ownership: Wasn't
> > >> the (original) intention to make the pages owned by the emulator
> > >> domain rather than the guest? I seem to recall you referring to
> > >> restrictions in do_mmu_update(), but a domain should always be
> > >> able to map pages it owns, shouldn't it?
> > >
> > > I'm sure we had this discussion before. I am trying to make resource
> > > mapping as uniform as possible so, like the grant table pages, the ioreq
> > > server pages are assigned to the target domain. Otherwise the domain
> > > trying to map resources has to know which actual domain they are assigned
> > > to, rather than the domain they relate to... which is pretty ugly.
> >
> > Didn't I suggest a slight change to the interface to actually make
> > this not as ugly?
> 
> Yes, you did, but I didn't really want to go that way unless I absolutely
> had to. If you'd really prefer things that way then I'll re-work the
> hypercall to allow the domain owning the resource pages to be passed back.
> Maybe it will ultimately end up neater.
> 
> >
> > >> Furthermore you continue to use Xen heap pages rather than
> > >> domain heap ones.
> > >
> > > Yes, this seems reasonable since Xen will always need mappings of the
> > > pages and the aforementioned reference counting only works for Xen heap
> > > pages AIUI.
> >
> > share_xen_page_with_guest() makes any page a Xen heap one.
> 
> Oh, that's somewhat counter-intuitive.
> 
> > See vmx_alloc_vlapic_mapping() for an example.
> >
> 
> Ok, thanks. If I change back to having the pages owned by the tools domain
> then I guess this will all be avoided anyway.
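
(For reference, the pattern Jan points at looks roughly like the sketch
below. This is from memory of vmx_alloc_vlapic_mapping(), with the p2m setup
omitted, so the details may not match the tree exactly: a domheap page is
allocated, and share_xen_page_with_guest() is the call that makes it
mappable by the guest.)

static int vmx_alloc_vlapic_mapping(struct domain *d)
{
    struct page_info *pg;
    unsigned long mfn;

    if ( !cpu_has_vmx_virtualize_apic_accesses )
        return 0;

    /* Allocate from the domain heap... */
    pg = alloc_domheap_page(d, MEMF_no_owner);
    if ( !pg )
        return -ENOMEM;

    mfn = page_to_mfn(pg);
    clear_domain_page(_mfn(mfn));

    /* ...then this call is what allows the guest to map the page. */
    share_xen_page_with_guest(pg, d, XENSHARE_writable);

    d->arch.hvm_domain.vmx.apic_access_mfn = mfn;

    return 0;
}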

I've run into a problem with this, but it may be easily soluble...

If I pass back the domid of the resource page owner and that owner is the tools 
domain, then when the tools domain attempts the mmu_update hypercall it fails 
because it has passed its own domid to mmu_update. The failure is caused by a 
check in get_pg_owner() which errors out if the passed-in domid == 
curr->domain_id but, strangely, not if domid == DOMID_SELF. Any idea why this 
check is there? To me it looks like it should be safe to specify 
curr->domain_id and have get_pg_owner() simply behave as if DOMID_SELF was 
passed.
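
For reference, the check in question looks something like this (paraphrasing
get_pg_owner() in xen/arch/x86/mm.c from memory, so the exact code may
differ):

static struct domain *get_pg_owner(domid_t domid)
{
    struct domain *pg_owner = NULL, *curr = current->domain;

    if ( likely(domid == DOMID_SELF) )
    {
        pg_owner = rcu_lock_current_domain();
        goto out;
    }

    /* This is the check that trips us up: naming yourself explicitly is
     * rejected even though DOMID_SELF is accepted just above. */
    if ( unlikely(domid == curr->domain_id) )
    {
        gdprintk(XENLOG_WARNING, "Cannot specify itself as foreign domain\n");
        goto out; /* pg_owner stays NULL, so the caller fails */
    }

    /* DOMID_IO, DOMID_XEN and foreign domid handling elided... */

 out:
    return pg_owner;
}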

The alternative would be to have the acquire_resource hypercall do the check 
and pass back DOMID_SELF if the ioreq server dm domain happens to match 
currd->domain_id, but that seems a bit icky.
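
Something like the following is what I have in mind (purely illustrative;
the variable names are made up):

    /* In the XENMEM_acquire_resource handler, once the owner of the ioreq
     * server pages is known: hide the dm domain's own domid behind
     * DOMID_SELF so the subsequent mmu_update doesn't trip get_pg_owner(). */
    domid_t owner = page_get_owner(page)->domain_id;

    if ( owner == currd->domain_id )
        owner = DOMID_SELF;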

  Paul

> 
>   Paul
> 
> > Jan
> >
> >
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

