
Re: [Xen-devel] [PATCH v5 12/15] x86: add iommu_op to enable modification of IOMMU mappings



> From: Paul Durrant
> Sent: Tuesday, August 7, 2018 4:44 PM
> 
> > -----Original Message-----
> > From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> > Sent: 07 August 2018 09:38
> > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> > <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> > Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson
> > <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Julien Grall
> > <julien.grall@xxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>
> > Subject: RE: [Xen-devel] [PATCH v5 12/15] x86: add iommu_op to enable
> > modification of IOMMU mappings
> >
> > > From: Paul Durrant [mailto:Paul.Durrant@xxxxxxxxxx]
> > > Sent: Tuesday, August 7, 2018 4:33 PM
> > >
> > > >
> > > > > From: Paul Durrant
> > > > > Sent: Saturday, August 4, 2018 1:22 AM
> > > > >
> > > > > This patch adds an iommu_op which checks whether it is possible or
> > > > > safe for a domain to modify its own IOMMU mappings and, if so,
> > > > > creates a rangeset to track modifications.
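For reference, the rangeset creation described above might look roughly
like this on the Xen side (a minimal sketch only; the field and function
names here are illustrative, not necessarily what the patch uses):

    /* Illustrative sketch, not the actual patch. */
    #include <xen/rangeset.h>
    #include <xen/sched.h>

    static int iommuop_enable_modification(struct domain *d)
    {
        /* ... checks that self-modification is safe elided ... */

        /* Track which BFN ranges the domain has modified.
         * 'iommu_modified_bfns' is an invented field name. */
        d->iommu_modified_bfns = rangeset_new(d, "iommu modified bfns", 0);
        if ( !d->iommu_modified_bfns )
            return -ENOMEM;

        return 0;
    }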
> > > >
> > > > I have to say that there might be a concept mismatch between us,
> > > > so I will stop reviewing here until we are aligned on the basic
> > > > understanding.
> > > >
> > > > What an IOMMU does is provide DMA isolation between devices.
> > > > Each device can be hooked up to a different translation structure
> > > > (representing a different bfn address space). The Linux kernel uses
> > > > this mechanism to harden kernel drivers (through the DMA APIs).
> > > > Multiple devices can also be attached to the same address space,
> > > > which the hypervisor uses when devices are assigned to the same VM.
> > > >
> > >
> > > Indeed.
> > >
> > > > Now with pvIOMMU exposed to dom0, dom0 could use it to harden
> > > > kernel drivers too. Then there will be multiple bfn address spaces:
> > > >
> > > > - a default bfn address space created by Xen, where bfn = pfn
> > > > - multiple per-bdf bfn address spaces created by Dom0, where
> > > >   bfn is completely unrelated to pfn
> > > >
> > > > The default space should not be changed by Dom0. It is attached
> > > > to devices for which Dom0 doesn't enable pvIOMMU mapping.
> > >
> > > No, that's not the point here. I'm not trying to re-architect Xen's
> > > IOMMU handling. All the IOMMU code in Xen, AFAICT, is built around
> > > the assumption that there is one set of page tables per VM and that
> > > all devices assigned to the VM get the same page tables. I suspect
> > > trying to change that would be a huge can of worms, and I have no
> > > need to go there for my purposes.
> >
> > Don't just think from the Xen side; think about how this IOMMU
> > looks to Dom0.
> >
> > Ideally, the pviommu driver is a new vendor driver attached to the
> > iommu core within dom0. It needs to provide iommu dma ops to support
> > dma_alloc/map operations from different device drivers. The iommu
> > core maintains a separate iova space for each device, so device
> > drivers can be isolated from each other.
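Concretely, I would expect the driver to register something along these
lines with the iommu core (a sketch only; every pviommu_* callback here
is hypothetical and would issue iommu_op hypercalls underneath):

    /* Hypothetical pviommu vendor driver registration; all of the
     * pviommu_* callbacks are invented names. */
    #include <linux/iommu.h>

    static const struct iommu_ops pviommu_ops = {
        .capable       = pviommu_capable,
        .domain_alloc  = pviommu_domain_alloc, /* one iova space per domain */
        .domain_free   = pviommu_domain_free,
        .attach_dev    = pviommu_attach_dev,   /* bind a device to a space */
        .detach_dev    = pviommu_detach_dev,
        .map           = pviommu_map,          /* hypercall to add a mapping */
        .unmap         = pviommu_unmap,        /* hypercall to remove one */
        .iova_to_phys  = pviommu_iova_to_phys,
        .pgsize_bitmap = 1UL << 12,            /* say, 4k pages only */
    };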
> 
> But there is nothing that says the IOVA space cannot be global, and
> a global space is good enough for a PV dom0.

You are right! Although my mental model is built entirely around
physical IOMMU capability, I checked, and Linux doesn't state that the
IOVA space cannot be global; it is purely up to the vendor IOMMU driver
to decide.

So the current version of pvIOMMU only provides a global address space,
unlike any existing IOMMU. Maybe we should explicitly call out this
fact in some capability field for future extension.
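Something as simple as the following would do (names are purely
illustrative, not part of this series):

    /* Illustrative only: advertise the single global BFN space in a
     * capability field, leaving room for per-device spaces later. */
    #define XEN_IOMMU_CAP_GLOBAL_SPACE (1u << 0)

    struct xen_iommu_op_query_caps {
        uint32_t caps; /* OUT: XEN_IOMMU_CAP_* flags */
    };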

> 
> >
> > Now dom0 gets only one global space, so why does dom0 need
> > to enable pviommu at all?
> 
> As I explained in another reply, it is primarily to allow a PV dom0 to have
> a BFN:GFN map. Since a PV domain maintains its own P2M, it is the domain
> that maintains the mapping. That is all I need to do.

Yes, I got this one.
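Just to spell out my understanding (field names invented for
illustration): since dom0 owns the BFN space, it chooses the bfn and
tells Xen which of its own GFNs should back it.

    /* Invented illustration of a map request in which the domain
     * itself supplies both sides of the BFN:GFN mapping. */
    struct xen_iommu_op_map_example {
        uint64_t bfn;   /* IN: bus frame number chosen by dom0 */
        uint64_t gfn;   /* IN: dom0 frame that should back it */
        uint32_t flags; /* IN: e.g. read/write permission */
    };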

Thanks
Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
