
Re: [Xen-devel] [PATCH v3 12/12] x86/hvm/ioreq: add a new mappable resource type...



> -----Original Message-----
> From: Roger Pau Monne
> Sent: 04 September 2017 16:02
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Stefano Stabellini
> <sstabellini@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; Andrew Cooper
> <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; Tim
> (Xen.org) <tim@xxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH v3 12/12] x86/hvm/ioreq: add a new mappable resource type...
> 
> On Thu, Aug 31, 2017 at 10:36:05AM +0100, Paul Durrant wrote:
> > ... XENMEM_resource_ioreq_server
> >
> > This patch adds support for a new resource type that can be mapped using
> > the XENMEM_acquire_resource memory op.
> >
> > If an emulator makes use of this resource type then, instead of mapping
> > gfns, the IOREQ server will allocate pages from the heap. These pages
> > will never be present in the P2M of the guest at any point and so are
> > not vulnerable to any direct attack by the guest. They are only ever
> > accessible by Xen and any domain that has mapping privilege over the
> > guest (which may or may not be limited to the domain running the emulator).
> >
> > NOTE: Use of the new resource type is not compatible with use of
> >       XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag is
> >       set.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > Acked-by: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
> > ---
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> > Cc: Tim Deegan <tim@xxxxxxx>
> > Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> > ---
> >  xen/arch/x86/hvm/ioreq.c        | 126 +++++++++++++++++++++++++++++++++++++++-
> >  xen/arch/x86/mm.c               |  27 +++++++++
> >  xen/include/asm-x86/hvm/ioreq.h |   2 +
> >  xen/include/public/hvm/dm_op.h  |   4 ++
> >  xen/include/public/memory.h     |   3 +
> >  5 files changed, 161 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> > index 2d98b43849..5d406bc1fb 100644
> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -241,6 +241,15 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
> >      struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> >      int rc;
> >
> > +    if ( iorp->page )
> > +    {
> > +        /* Make sure the page has not been allocated */
> > +        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
> > +            return -EPERM;
> > +
> > +        return 0;
> > +    }
> > +
> >      if ( d->is_dying )
> >          return -EINVAL;
> >
> > @@ -263,6 +272,60 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
> >      return rc;
> >  }
> >
> > +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > +{
> > +    struct domain *currd = current->domain;
> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > +
> > +    if ( iorp->page )
> > +    {
> > +        /* Make sure the page has not been mapped */
> > +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> > +            return -EPERM;
> > +
> > +        return 0;
> > +    }
> > +
> > +    /*
> > +     * Allocated IOREQ server pages are assigned to the emulating
> > +     * domain, not the target domain. This is because the emulator is
> > +     * likely to be destroyed after the target domain has been torn
> > +     * down, and we must use MEMF_no_refcount otherwise page allocation
> > +     * could fail if the emulating domain has already reached its
> > +     * maximum allocation.
> > +     */
> > +    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
> 
> So AFAICT (correct me if I'm wrong), the number of pages that can be
> allocated here is limited by MAX_NR_IOREQ_SERVERS, each ioreq server
> can only have at most one page.
> 

Each server can have at most two pages (one for synchronous ioreqs and one
for buffered ioreqs).

> > +    if ( !iorp->page )
> > +        return -ENOMEM;
> > +
> > +    get_page(iorp->page, currd);
> 
> Hm, didn't we agree that this get_page was not needed? AFAICT you need
> this if you use MEMF_no_owner, because the page is not added to
> d->page_list.
> 

Oh, good catch. I completely forgot to ditch that.

  Paul

> Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

