
Re: [Xen-devel] [PATCH v2 3/9] xen/mm: move modify_identity_mmio to global file and drop __init



On Tue, Apr 25, 2017 at 10:09:34AM +0100, Julien Grall wrote:
> On 25/04/2017 09:01, Roger Pau Monne wrote:
> > On Mon, Apr 24, 2017 at 03:42:08PM +0100, Julien Grall wrote:
> > > On 20/04/17 16:17, Roger Pau Monne wrote:
> > > > +                unsigned long nr_pages, const bool map)
> > > > +{
> > > > +    int rc;
> > > > +
> > > > +    /*
> > > > +     * Make sure this function is only used by the hardware domain,
> > > > +     * because it can take an arbitrarily long time, and could DoS the
> > > > +     * whole system.
> > > > +     */
> > > > +    ASSERT(is_hardware_domain(d));
> > > 
> > > What would be the plan for guest if we decide to use vpci?
> > 
> > One option would be to not allow the DomU to relocate its BARs and to
> > ignore writes to the memory decode bit (PCI_COMMAND_MEMORY) of the command
> > register, thus always having the BARs mapped. The other is to somehow allow
> > VMExit (and the ARM equivalent) continuation (something similar to what we
> > do with hypercalls).
> 
> My understanding is that BARs may be allocated by the kernel because the
> firmware didn't do it. This is currently the case on ARM (and I guess x86),
> where Linux will always go through the BARs.

No, on x86 BARs are allocated by the firmware. Linux or whatever OS will scan
the BARs in order to get their position/size, but will not try to move them
AFAIK.
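
For reference, the sizing dance the OS does looks roughly like this (a
minimal sketch; read32/write32 are placeholder config accessors, not Xen's
exact helpers, and a plain 32-bit memory BAR is assumed):

    /* Size a 32-bit memory BAR without ever moving it. */
    static uint32_t bar_size(struct pci_dev *pdev, unsigned int reg)
    {
        uint32_t bar = read32(pdev, reg);       /* original value */
        uint32_t mask;

        write32(pdev, reg, ~0U);                /* write all-ones */
        mask = read32(pdev, reg);               /* read back the mask */
        write32(pdev, reg, bar);                /* restore */

        /* The low 4 bits of a memory BAR are flags, not address bits. */
        return ~(mask & ~0x0fU) + 1;
    }

The position/size can be read without changing where the BAR decodes, which
is also why ignoring relocation attempts from a DomU is viable.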

> So if you do the first option, who would decide the position of the BARs?

On x86 that would be what the firmware has set.
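
As for ignoring those writes, the vPCI trap handler could simply preserve the
current decode bit, along these lines (cmd_write and the read16/write16
accessors are made up for illustration, this is not the posted vPCI code):

    /* Sketch: discard guest changes to the memory decode bit. */
    static void cmd_write(struct pci_dev *pdev, uint16_t val)
    {
        uint16_t cur = read16(pdev, PCI_COMMAND);

        /* Keep the current PCI_COMMAND_MEMORY setting, take the rest. */
        val = (val & ~PCI_COMMAND_MEMORY) | (cur & PCI_COMMAND_MEMORY);
        write16(pdev, PCI_COMMAND, val);
    }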

> For the second option, we can take advantage of superpage (4K, 2M, 1G)
> mappings on ARM, so the number of actual mappings would be really limited.

IIRC x86 should also do MMIO mappings with superpages. Maybe we should time how
long this takes and then make a decision.
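
To put numbers on it: identity mapping a 256MiB BAR takes 65536 entries with
4K pages, but only 128 with 2M superpages (or a single entry if a suitably
aligned 1G mapping can be used), so the per-call cost should drop by orders
of magnitude.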

> Also, we are looking at MMIO continuation on ARM for other parts of the
> hypervisor. We might be able to leverage that for this function.

Indeed.
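
For x86, something built on the partial-progress return the patch already
relies on could look like the sketch below (the -ERESTART convention and the
hypercall_preempt_check() placement are assumptions for illustration, not
part of this series):

    /*
     * Sketch only: stop after partial progress so the caller can set up
     * a continuation, instead of looping to completion.
     */
    static int modify_mmio_preemptible(struct domain *d, unsigned long pfn,
                                       unsigned long nr_pages, bool map)
    {
        while ( nr_pages )
        {
            int rc = (map ? map_mmio_regions : unmap_mmio_regions)
                     (d, _gfn(pfn), nr_pages, _mfn(pfn));

            if ( rc <= 0 )
                return rc;            /* done (0) or real error (< 0) */

            /* rc > 0: that many pages were handled before yielding. */
            pfn += rc;
            nr_pages -= rc;
            if ( hypercall_preempt_check() )
                return -ERESTART;     /* caller re-invokes with updated pfn */
        }

        return 0;
    }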

> > 
> > > > +
> > > > +    for ( ; ; )
> > > > +    {
> > > > +        rc = (map ? map_mmio_regions : unmap_mmio_regions)
> > > 
> > > On ARM, map_mmio_regions and unmap_mmio_regions will map the MMIO with a
> > > very strict attribute. I think we would need an extra argument to know
> > > the wanted memory attribute (maybe p2m_type_t?).
> > 
> > I'm not sure I can do anything regarding this ATM. Sorry for my ignorance,
> > but map_mmio_regions on ARM maps the region as p2m_mmio_direct_dev, and
> > according to the comment that's "Read/write mapping of genuine Device MMIO
> > area", which fits exactly with my usage (I'm using it to map BARs).
> 
> We have a few p2m_mmio_direct_* p2m types because the architecture allows us
> to use fine-grained memory attributes.
> 
> The p2m type p2m_mmio_direct_dev is very restrictive (unaligned accesses
> forbidden, non-cacheable, non-gatherable). It should be used for MMIO
> regions that have side-effects, and it will affect performance.
> 
> We use this one by default as it restricts the memory attributes usable by
> the guest. However, this will be an issue for at least cacheable BARs. We
> had a similar issue recently on ARM with an SRAM device, as the driver may
> do unaligned and cacheable accesses.
> 
> For DOM0 we are using p2m_mmio_direct_c and rely on the OS to restrict the
> memory attributes when necessary. We cannot do that for guests as this may
> have some security implications.
> 
> So for guests we will decide on a case-by-case basis. For instance, when you
> map a BAR, you know its kind and can decide on a proper memory attribute.

Oh, OK. I guess you can add a new parameter to pass whether the BAR is
prefetchable or not.
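
Something like the following, purely as a sketch (the extra p2m_type_t
parameter and the bar_prefetchable flag are assumptions on top of the posted
patch, and x86 would presumably ignore or translate the type):

    /* Sketch: let the caller pick the p2m type from the BAR's properties. */
    int modify_identity_mmio(struct domain *d, unsigned long pfn,
                             unsigned long nr_pages, p2m_type_t t, bool map);

    /* At the call site, derived from the BAR flags: */
    p2m_type_t t = bar_prefetchable ? p2m_mmio_direct_c    /* cacheable */
                                    : p2m_mmio_direct_dev; /* device, strict */

    rc = modify_identity_mmio(d, pfn, nr_pages, t, true);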

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
