
Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges



> From: Paul Durrant [mailto:Paul.Durrant@xxxxxxxxxx]
> Sent: Tuesday, February 13, 2018 5:25 PM
> 
> > -----Original Message-----
> > From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> > Sent: 13 February 2018 06:52
> > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> > <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> > Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson
> > <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Jan Beulich
> > <jbeulich@xxxxxxxx>
> > Subject: RE: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
> >
> > > From: Paul Durrant
> > > Sent: Monday, February 12, 2018 6:47 PM
> > >
> > > Certain areas of memory, such as RMRRs, must be mapped 1:1
> > > (i.e. BFN == MFN) through the IOMMU.
> > >
> > > This patch adds an iommu_op to allow these ranges to be queried.
> > >
> > > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > > ---
> > > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > > Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > > Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
> > > Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> > > Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> > > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> > > Cc: Tim Deegan <tim@xxxxxxx>
> > > Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> > > ---
> > >  xen/arch/x86/iommu_op.c       | 121 ++++++++++++++++++++++++++++++++++++++++++
> > >  xen/include/public/iommu_op.h |  35 ++++++++++++
> > >  xen/include/xlat.lst          |   2 +
> > >  3 files changed, 158 insertions(+)
> > >
> > > diff --git a/xen/arch/x86/iommu_op.c b/xen/arch/x86/iommu_op.c
> > > index edd8a384b3..ac81b98b7a 100644
> > > --- a/xen/arch/x86/iommu_op.c
> > > +++ b/xen/arch/x86/iommu_op.c
> > > @@ -22,6 +22,58 @@
> > >  #include <xen/event.h>
> > >  #include <xen/guest_access.h>
> > >  #include <xen/hypercall.h>
> > > +#include <xen/iommu.h>
> > > +
> > > +struct get_rdm_ctxt {
> > > +    unsigned int max_entries;
> > > +    unsigned int nr_entries;
> > > +    XEN_GUEST_HANDLE(xen_iommu_reserved_region_t) regions;
> > > +};
> > > +
> > > +static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg)
> > > +{
> > > +    struct get_rdm_ctxt *ctxt = arg;
> > > +
> > > +    if ( ctxt->nr_entries < ctxt->max_entries )
> > > +    {
> > > +        xen_iommu_reserved_region_t region = {
> > > +            .start_bfn = start,
> > > +            .nr_frames = nr,
> > > +        };
> > > +
> > > +        if ( copy_to_guest_offset(ctxt->regions, ctxt->nr_entries, &region,
> > > +                                  1) )
> > > +            return -EFAULT;
> >
> > RMRR entries are device-specific; that's why an 'id' (i.e. sbdf) field
> > is introduced for such a check.
> 
> What I want here is the union of all RMRRs for all devices in the domain. I
> believe that is what the code will currently query, but I could be wrong.

RMRRs are per-device. I'm not sure why we would want to impose them on every
device in the domain when they are not related to that device.
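
To illustrate, a minimal sketch of what I have in mind (assuming the context
gains a caller-supplied 'sbdf' field, and that returning 0 tells the iterator
to skip the entry and continue; both are assumptions on my side, not what the
posted patch does):

static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg)
{
    struct get_rdm_ctxt *ctxt = arg;

    /* Skip RMRRs that do not belong to the device being queried. */
    if ( id != ctxt->sbdf )       /* 'sbdf' is an assumed new context field */
        return 0;                 /* assumed to mean "skip and continue" */

    if ( ctxt->nr_entries < ctxt->max_entries )
    {
        xen_iommu_reserved_region_t region = {
            .start_bfn = start,
            .nr_frames = nr,
        };

        if ( copy_to_guest_offset(ctxt->regions, ctxt->nr_entries, &region,
                                  1) )
            return -EFAULT;
    }

    ctxt->nr_entries++;

    return 1;
}

Whether the new op should take an sbdf and report per-device ranges, or keep
reporting the union for the whole domain as you intend, is really the question
here.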

> 
> >
> > > +    }
> > > +
> > > +    ctxt->nr_entries++;
> > > +
> > > +    return 1;
> > > +}
> > > +
> > > +static int iommuop_query_reserved(struct xen_iommu_op_query_reserved *op)
> >
> > I didn't get why we cannot reuse the existing XENMEM_reserved_device_memory_map?
> >
> 
> This hypercall is not intended to be tools-only. That one is, unless I misread
> the #ifdefs.
> 

I didn't realize that. I'm curious, though: how does Xen enforce such a
tools-only policy, and what would happen if the call were made from the Dom0
kernel? I'm also not comfortable with creating a new interface for what looks
like a duplicated purpose...
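
For reference, I assume the #ifdefs Paul refers to are the usual guard in the
public headers, roughly like the sketch below (the subop number and struct
layout are from memory and only illustrative):

#if defined(__XEN__) || defined(__XEN_TOOLS__)

#define XENMEM_reserved_device_memory_map   27

struct xen_reserved_device_memory {
    xen_pfn_t start_pfn;
    xen_ulong_t nr_pages;
};
typedef struct xen_reserved_device_memory xen_reserved_device_memory_t;
DEFINE_XEN_GUEST_HANDLE(xen_reserved_device_memory_t);

#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */

If that's all there is, it only hides the definitions from callers built
without __XEN_TOOLS__; it doesn't by itself stop a Dom0 kernel carrying its
own copy of the definitions, which is why I'm asking whether the hypervisor
also rejects the subop at run time.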

Thanks
Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

