
Re: [Xen-devel] [PATCH 5/7] public / x86: introduce __HYPERCALL_iommu_op



> From: Paul Durrant [mailto:Paul.Durrant@xxxxxxxxxx]
> Sent: Monday, February 26, 2018 5:57 PM
> 
> > -----Original Message-----
> > From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> > Sent: 24 February 2018 02:57
> > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx
> > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> > <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> > Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson
> > <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Jan Beulich
> > <jbeulich@xxxxxxxx>; Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
> > Subject: RE: [Xen-devel] [PATCH 5/7] public / x86: introduce
> > __HYPERCALL_iommu_op
> >
> > > From: Paul Durrant [mailto:Paul.Durrant@xxxxxxxxxx]
> > > Sent: Friday, February 23, 2018 5:41 PM
> > >
> > > > -----Original Message-----
> > > > From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> > > > Sent: 23 February 2018 05:17
> > > > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-
> > > devel@xxxxxxxxxxxxxxxxxxxx
> > > > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> > > > <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> > > > Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson
> > > > <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Jan Beulich
> > > > <jbeulich@xxxxxxxx>; Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
> > > > Subject: RE: [Xen-devel] [PATCH 5/7] public / x86: introduce
> > > > __HYPERCALL_iommu_op
> > > >
> > > > > From: Paul Durrant [mailto:Paul.Durrant@xxxxxxxxxx]
> > > > > Sent: Tuesday, February 13, 2018 5:23 PM
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> > > > > > Sent: 13 February 2018 06:43
> > > > > > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-
> > > > > devel@xxxxxxxxxxxxxxxxxxxx
> > > > > > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> > > > > > <wei.liu2@xxxxxxxxxx>; George Dunlap
> <George.Dunlap@xxxxxxxxxx>;
> > > > > > Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson
> > > > > > <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Jan
> Beulich
> > > > > > <jbeulich@xxxxxxxx>; Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
> > > > > > Subject: RE: [Xen-devel] [PATCH 5/7] public / x86: introduce
> > > > > > __HYPERCALL_iommu_op
> > > > > >
> > > > > > > From: Paul Durrant
> > > > > > > Sent: Monday, February 12, 2018 6:47 PM
> > > > > > >
> > > > > > > This patch introduces the boilerplate for a new hypercall to allow
> > > > > > > a domain to control IOMMU mappings for its own pages.
> > > > > > > Whilst there is duplication of code between the native and compat
> > > > > > > entry points which appears ripe for some form of combination, I
> > > > > > > think it is better to maintain the separation as-is because the
> > > > > > > compat entry point will necessarily gain complexity in subsequent
> > > > > > > patches.
> > > > > > >
> > > > > > > NOTE: This hypercall is only implemented for x86 and is currently
> > > > > > >       restricted by XSM to dom0 since it could be used to cause
> > > > > > >       IOMMU faults which may bring down a host.
> > > > > > >
> > > > > > > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > > > > > [...]
> > > > > > > +
> > > > > > > +
> > > > > > > +static bool can_control_iommu(void)
> > > > > > > +{
> > > > > > > +    struct domain *currd = current->domain;
> > > > > > > +
> > > > > > > +    /*
> > > > > > > +     * IOMMU mappings cannot be manipulated if:
> > > > > > > +     * - the IOMMU is not enabled or,
> > > > > > > +     * - the IOMMU is passed through or,
> > > > > > > +     * - shared EPT configured or,
> > > > > > > +     * - Xen is maintaining an identity map.
> > > > > >
> > > > > > "for dom0"
> > > > > >
> > > > > > > +     */
> > > > > > > +    if ( !iommu_enabled || iommu_passthrough ||
> > > > > > > +         iommu_use_hap_pt(currd) || need_iommu(currd) )
> > > > > >
> > > > > > I guess it's clearer to directly check iommu_dom0_strict here
> > > > >
> > > > > Well, the problem with that is that it totally ties this interface to
> > > > > dom0. Whilst, in practice, that is the case at the moment (because of
> > > > > the xsm check) I do want to leave the potential to allow other PV
> > > > > domains to control their IOMMU mappings, if that makes sense in future.
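
For reference, a rough sketch of what the alternative check Kevin suggests might
look like (illustrative only, not code from the series; iommu_dom0_strict is the
existing global flag):

    /* Hypothetical variant tying the check explicitly to dom0 strict mode. */
    static bool can_control_iommu_strict(void)
    {
        struct domain *currd = current->domain;

        return iommu_enabled && !iommu_passthrough &&
               !iommu_use_hap_pt(currd) && !iommu_dom0_strict;
    }

Paul's point above is that this hard-codes the dom0 case, whereas the per-domain
need_iommu() test leaves room for other PV domains later.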
> > > > >
> > > >
> > > > first it's inconsistent with the comments - "Xen is maintaining
> > > > an identity map" - which only applies to dom0.
> > >
> > > That's not true. If I assign a PCI device to an HVM domain, for instance,
> > > then need_iommu() is true for that domain and indeed Xen maintains a
> > > 1:1 BFN:GFN map for that domain.
> > >
> > > >
> > > > second I'm afraid !need_iommu is not an accurate condition to represent
> > > > a PV domain. what about iommu also being enabled for future PV domains?
> > > >
> > >
> > > I don't quite follow... need_iommu is a per-domain flag, set for dom0 when
> > > in strict mode, set for others when passing through a device. Either way,
> > > if Xen is maintaining the IOMMU pagetables then it is clearly unsafe for
> > > the domain to also be messing with them.
> > >
> >
> > I don't think it's a mess. Xen always maintains the IOMMU pagetables
> > in a way that the guest expects:
> >
> 
> I'll define some terms to try to avoid confusion...
> 
> - where the IOMMU code in Xen maintains a map such that BFN == MFN,
> let's call this an 'identity MFN map'
> - where the IOMMU code in Xen *initially programmes* the IOMMU with
> an identity MFN map for the whole host, let's call this a 'host map'
> - where the IOMMU code in Xen maintains a map such that BFN == GFN,
> let's call this an 'identity GFN map'
> - where the IOMMU code in Xen *initially programmes* the IOMMU with
> an identity GFN map for the guest, let's call this a 'guest map'

Can you introduce a name for each such mapping? Then when you describe
an identity mapping in a future version, people can immediately get the
actual meaning. At least to me, I always think first about the mapping in
the actual IOMMU page table, which is always a BFN->MFN mapping (where
the definition of BFN varies between usages).
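
For illustration, one hypothetical way to give those four regimes explicit names
in code (the identifiers below are invented for this discussion, not taken from
the series):

    /* Hypothetical naming sketch only. */
    enum iommu_map_regime {
        IOMMU_MAP_IDENTITY_MFN, /* Xen maintains BFN == MFN (e.g. strict-mode dom0) */
        IOMMU_MAP_HOST,         /* Xen only initialises BFN == MFN for the host    */
        IOMMU_MAP_IDENTITY_GFN, /* Xen maintains BFN == GFN (e.g. HVM passthrough) */
        IOMMU_MAP_GUEST,        /* Xen only initialises BFN == GFN for the guest   */
    };

In every case the hardware page table still translates BFN->MFN; the names only
capture what the BFN is defined to equal and whether Xen keeps the map up to date.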

> 
> > 1) for dom0 (w/o pvIOMMU) in strict mode, it's MFN:MFN identity
> > mapping
> 
> Without strict mode, a host map is set up for dom0, otherwise it is an
> identity MFN map. In both cases the xen-swiotlb driver is used in Linux as
> there is no difference from its point of view.
> 
> > 2) for dom0 (w/ pvIOMMU), it's BFN:MFN mapping
> 
> With PV-IOMMU there is also a host map but, since a host map is only
> initialized and not maintained (i.e. nothing happens when pages are
> removed from or added to dom0), it is safe for dom0 to control the
> IOMMU mappings as it will not conflict with anything Xen is doing.
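
A rough sketch of the initialised-vs-maintained distinction (simplified, not the
real Xen call chain; guest_physmap_add_page_sketch is a made-up name):

    /* Simplified illustration of what 'maintained' means here. */
    static void guest_physmap_add_page_sketch(struct domain *d,
                                              unsigned long gfn,
                                              unsigned long mfn)
    {
        /* ... p2m update elided ... */

        if ( need_iommu(d) )
            /* Maintained: Xen mirrors every change into the IOMMU. */
            iommu_map_page(d, gfn, mfn,
                           IOMMUF_readable | IOMMUF_writable);

        /*
         * Initialised only: for a dom0 host map nothing happens here,
         * so a PV-IOMMU driver can safely take over the IOMMU tables.
         */
    }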

What do you mean by "not maintained"? The host map will be programmed
into the IOMMU page table before launching Dom0, since the hypervisor
doesn't know whether a pvIOMMU driver will be loaded. Later the
pvIOMMU driver is loaded and issues hypercalls to control its own
mapping; the hypervisor then switches the IOMMU page table from the
host map to the new one, which is the same logic as for virtual VT-d
with an HVM guest. That is what I call an address space switch.
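
A sketch of that switch from the guest driver's side (the op name, struct layout
and HYPERVISOR_iommu_op wrapper below are all hypothetical, not the interface
defined by this patch):

    /* Hypothetical pvIOMMU driver flow, for illustration only. */
    static int pv_iommu_take_over(void)
    {
        struct xen_iommu_op op = {
            .op = XEN_IOMMUOP_enable,   /* made-up op name */
        };

        /*
         * Before this call the device sees the boot-time host map.
         * Afterwards Xen points the IOMMU at the driver-owned table:
         * the "address space switch".
         */
        return HYPERVISOR_iommu_op(&op, 1);
    }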

> 
> > 3) for HVM (w/o virtual VTd) with passthrough device, it's GFN:MFN
> 
> I have not been following virtual VTd closely but, yes, as it stands *when
> h/w is passed through* the guest gets an identity GFN map; otherwise it
> gets no map at all.
> 
> > 4) for HVM (w/ virtual VTd) with passthrough device, it's BFN:MFN
> >
> 
> With virtual VTd I'd expect there would be a guest map and then the guest
> would get the same level of control over the IOMMU that PV-IOMMU
> allows for a PV domain but, of course, such control is as-yet unsafe for
> guests since an IOMMU fault can cause a host crash.

I'm not sure why you call it unsafe. Even today, with any passthrough
device (w/o virtual VT-d exposed), a bad guest driver can always cause
DMA access to an invalid GPA address and thus cause an IOMMU fault.
Adding virtual VT-d doesn't change any security aspect here.

> 
> > (from IOMMU p.o.v we can always call all 4 categories as BFN:MFN.
> > I deliberately separate them from usage p.o.v, where 'BFN'
> > represents the cases where guest explicitly manages a new address
> > space - different from physical address space in its mind)
> >
> > there is an address space switch in 2) and 4) before and after
> > enabling vIOMMU.
> 
> Is there? The initial mapping in 2 is the same as 1, and the initial
> mapping in 4 is the same as 3.
> 
> >
> > above is why I didn’t follow the assumption that "Xen is maintaining
> > an identity map" is identical to need_iommu.
> >
> 
> The crucial point is that in cases 2 and 4 Xen is not *maintaining* any map
> so need_iommu(d) should be false and hence the domain can control its
> own mappings without interfering with what Xen is doing internally.
> 
> Does that help clarify?
> 
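
To make that "crucial point" concrete, here is the check from the patch again
with the four numbered cases annotated (the annotations and trailing returns are
added for illustration; only the condition itself is from the patch):

    static bool can_control_iommu(void)
    {
        struct domain *currd = current->domain;

        /*
         * Cases 1 and 3 (strict-mode dom0; HVM passthrough without a
         * vIOMMU): need_iommu(currd) is true because Xen actively
         * maintains the mappings, so the domain may not control them.
         * Cases 2 and 4 (PV-IOMMU; virtual VT-d): only an initial map
         * is programmed, need_iommu(currd) is false, and control can
         * safely be handed to the domain.
         */
        if ( !iommu_enabled || iommu_passthrough ||
             iommu_use_hap_pt(currd) || need_iommu(currd) )
            return false;

        return true;
    }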

Again, the above description is really confusing, as you don't specify
which mapping is being referred to here.

Thanks
Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

