
Re: [Xen-devel] [PATCH v5 13/30] ARM: GICv3: forward pending LPIs to guests



On Thu, 6 Apr 2017, Julien Grall wrote:
> On 04/06/2017 07:47 PM, Stefano Stabellini wrote:
> > On Thu, 6 Apr 2017, Andre Przywara wrote:
> > > > >      unsigned long status;
> > > > >      struct irq_desc *desc; /* only set if the irq corresponds to a physical irq */
> > > > >      unsigned int irq;
> > > > >  #define GIC_INVALID_LR         (uint8_t)~0
> > > > >      uint8_t lr;
> > > > >      uint8_t priority;
> > > > > +    uint8_t lpi_priority;       /* Caches the priority if this is an LPI. */
> > > > 
> > > > The commit message says: "we enhance struct pending_irq to cache the
> > > > pending bit and the priority information for LPIs, as we can't afford to
> > > > walk the tables in guest memory every time we handle an incoming LPI." I
> > > > thought it would be a direct access, since we have the vlpi number in
> > > > hand? Why is it a problem?
> > > > 
> > > > If there has been a conversation about this that I am missing, please
> > > > provide a link and I'll go back and read it.
> > > 
> > > Well, the property table lives in guest memory, like the other ITS
> > > tables, and we now access it in a new way (vgic_access_guest_memory()),
> > > which is quite costly: we need to do the p2m lookup, map the page,
> > > access the data, unmap the page and put the page back.
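
For reference, that per-access sequence looks roughly like the sketch
below. The helper names mirror Xen's (get_page_from_gfn(),
__map_domain_page(), unmap_domain_page(), put_page()), but the function
itself and its signature are invented for illustration, and the real
code also validates the p2m type and handles accesses that cross a page
boundary:

    static int guest_table_read(struct domain *d, paddr_t gpa,
                                void *buf, unsigned int size)
    {
        struct page_info *page;
        p2m_type_t t;
        void *p;

        /* 1. p2m lookup: gfn -> struct page_info (takes a reference). */
        page = get_page_from_gfn(d, paddr_to_pfn(gpa), &t, P2M_ALLOC);
        if ( !page )
            return -EINVAL;

        /* 2. Map the page into Xen's address space. */
        p = __map_domain_page(page);

        /* 3. Access the data. */
        memcpy(buf, p + (gpa & ~PAGE_MASK), size);

        /* 4. Unmap the page and drop the reference again. */
        unmap_domain_page(p);
        put_page(page);

        return 0;
    }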
> > 
> > map, unmap and put are (almost) no-ops. The only real cost is the p2m
> > lookup, which could easily be avoided by storing the struct page_info*
> > or mfn corresponding to the guest-provided data structures. We should
> > be able to do this in a very small number of steps, right?
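
A hypothetical sketch of that idea (the struct and function names here
are invented for illustration): take the page references once, when the
guest registers the table, so each subsequent access is just
map/read/unmap:

    struct table_cache {
        /* Filled once, when the guest writes the table base register;
         * one entry per 4KB page of the table. */
        struct page_info **pages;
        unsigned int nr_pages;
    };

    static uint8_t cached_table_read_byte(struct table_cache *tbl,
                                          paddr_t offset)
    {
        struct page_info *page;
        void *p;
        uint8_t val;

        ASSERT((offset >> PAGE_SHIFT) < tbl->nr_pages);
        page = tbl->pages[offset >> PAGE_SHIFT];

        /* On arm64 all RAM is already mapped, so this is (almost) a
         * no-op... */
        p = __map_domain_page(page);
        val = *(uint8_t *)(p + (offset & ~PAGE_MASK));
        unmap_domain_page(p);   /* ...as is this. */

        return val;
    }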
> 
> The property table could be really big (its size scales with the number of
> LPIs supported by the guest). The memory is contiguous from the domain's
> point of view but not necessarily on the host, so you would have to store a
> large number of MFNs. IIRC the property table can be up to 4GB, so we would
> need an 8MB array.

Yes, but in that scenario, with so many LPIs, the new lpi_priority field
would itself use 4GB overall (one byte per possible LPI). In comparison,
8MB sounds awesome. But I hadn't noticed that it uses the padding space.
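
Spelling out the arithmetic behind both numbers (assuming 4KB pages and
8 bytes per stored MFN or page pointer):

    property table:   1 byte/LPI * 2^32 LPIs            = 4GB
    pages to track:   4GB / 4KB per page                = 2^20 pages
    MFN array:        2^20 entries * 8 bytes per entry  = 8MB
    cached priority:  1 byte/LPI * 2^32 LPIs            = 4GB
                      (but it fits in existing padding)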


> Also, reading from the guest would require safety checks which are not
> necessary here.
> 
> Furthermore, we have space in pending_irq because of the padding.
> 
> So why bother doing that?

I hadn't noticed that it takes over some of the padding. That is much
better. I can live with it then.
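
To make the padding point concrete, here is an illustrative layout for
the hunk quoted above, assuming a 64-bit build (the offsets and the
trailing padding are my reconstruction, not the actual definition):

    struct pending_irq_sketch {
        unsigned long status;      /* offset  0, 8 bytes */
        struct irq_desc *desc;     /* offset  8, 8 bytes */
        unsigned int irq;          /* offset 16, 4 bytes */
        uint8_t lr;                /* offset 20 */
        uint8_t priority;          /* offset 21 */
        uint8_t lpi_priority;      /* offset 22: previously a padding
                                      byte, so caching the priority here
                                      costs no extra memory */
        /* one padding byte remains at offset 23 before the next
         * 8-byte-aligned member */
    };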

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

