
Re: [Xen-devel] [PATCH 2/3] xen/arm: vgic-v3: Don't create empty re-distributor regions



On Fri, 28 Sep 2018, Julien Grall wrote:
> On 09/28/2018 12:34 AM, Stefano Stabellini wrote:
> > On Wed, 26 Sep 2018, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 09/25/2018 09:38 PM, Stefano Stabellini wrote:
> > > > On Tue, 4 Sep 2018, Julien Grall wrote:
> > > > > At the moment, Xen is assuming the hardware domain will have the same
> > > > > number of re-distributor regions as the host. However, as the
> > > > > number of CPUs or the stride (e.g. on GICv4) may be different, we end
> > > > > up exposing regions which do not contain any re-distributor.
> > > > > 
> > > > > When booting, Linux will go through all the re-distributor regions to
> > > > > check whether a property (e.g. vLPIs) is available across all the
> > > > > re-distributors. This will result in a data abort on empty regions
> > > > > because there is no underlying re-distributor.
> > > > > 
> > > > > So we need to limit the number of regions exposed to the hardware
> > > > > domain. The code is reworked to only expose the minimum number of
> > > > > regions required by the hardware domain. It is assumed the regions
> > > > > will be populated starting from the first one.
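
For illustration, a minimal sketch of the region-limiting idea described in
the commit message: walk the host regions in order and stop once enough
re-distributors are covered for the domain's vCPUs. The struct, field names,
and regions_needed() helper are assumptions for this sketch, not the actual
Xen code:

    #include <stdint.h>

    struct rdist_region {
        uint64_t base;
        uint64_t size;   /* bytes */
    };

    /*
     * Each re-distributor occupies a fixed stride: 128KB (two 64KB
     * frames) on GICv3, 256KB on GICv4 (extra VLPI frames).
     */
    static unsigned int regions_needed(const struct rdist_region *regions,
                                       unsigned int nr_regions,
                                       uint64_t stride,
                                       unsigned int nr_vcpus)
    {
        unsigned int i, covered = 0;

        /* Regions are assumed to be populated starting from the first one. */
        for ( i = 0; i < nr_regions && covered < nr_vcpus; i++ )
            covered += regions[i].size / stride;

        return i; /* minimum number of regions to expose */
    }
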
> > > > 
> > > > I have a question: given that it is possible for a rdist_region to cover
> > > > more than 1 cpu, could we get into trouble if the last rdist_region of
> > > > the hardware_domain covers 2 cpus, but actually dom0 only has 1 vcpu?
> > > > get_vcpu_from_rdist would return NULL and vgic_v3_rdistr_mmio_read/write
> > > > would return 0.
> > > > This case seems to be handled correctly but I wanted to
> > > > double check.
> > > 
> > > 0 means a data abort will be injected into the guest. However, the guest
> > > should never touch that because the last valid re-distributor of the
> > > region will have GICR_TYPER.Last set.
> > > 
> > > So the guest OS will stop looking for more re-distributors in that region.
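
For context on the scan Julien describes: a guest walks a region frame by
frame and stops at the re-distributor whose GICR_TYPER has the Last bit set
(bit 4, per the GICv3 spec), so it never reads past the last populated frame
into the empty tail that would otherwise fault. A minimal sketch; readq() is
a placeholder for a 64-bit MMIO read, not a specific API:

    #include <stdint.h>

    #define GICR_TYPER       0x0008       /* 64-bit register in the RD frame */
    #define GICR_TYPER_LAST  (1ULL << 4)  /* last re-distributor in region */

    extern uint64_t readq(volatile void *addr);  /* assumed MMIO accessor */

    static void scan_rdist_region(volatile uint8_t *base, uint64_t size,
                                  uint64_t stride)
    {
        uint64_t offset;

        for ( offset = 0; offset < size; offset += stride )
        {
            uint64_t typer = readq(base + offset + GICR_TYPER);

            /* ... check per-re-distributor properties (e.g. vLPIs) here ... */

            if ( typer & GICR_TYPER_LAST )
                break; /* nothing to look for beyond this frame */
        }
    }
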
> > 
> > OK
> > 
> > 
> > > > I think we also need to fix vgic_v3_rdist_count? Today it just returns
> > > > vgic_v3_hw.nr_rdist_regions for dom0. It would be bad to leave it
> > > > unfixed, wouldn't it? If we fixed it, we might be able to get rid of
> > > > the modifications to vgic_v3_real_domain_init.
> > > 
> > > We don't want to fix vgic_v3_rdist_count. The helper returns the maximum
> > > number of re-distributor regions.
> > 
> > We don't want to or we can't? Because it looks like we would want to fix
> > vgic_v3_rdist_count if we could, right? It is called from domain
> > specific initialization functions, so theoretically it should return
> > domain specific vgic information, not physical information.
> 
> We don't want to fix it in the current design.
> 
> > 
> > 
> > > This is used to compute the number of IO handlers and to allocate the
> > > array storing the regions.
> > > 
> > > I am pretty sure you will say we will waste memory. However, at the
> > > moment, we need to know the number of IO handlers well before we know
> > > the number of vCPUs. For the array, we would need to go through the
> > > regions twice (regions may not be the same size, so we can't easily
> > > infer the number needed). Overall, the amount of memory used is the
> > > same as today (so not really a waste per se).
> > > 
> > > It might be possible to limit that once we rework the common code to
> > > know the number of vCPUs earlier on (see the discussion on patch #1).
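
To make the ordering constraint concrete, a sketch of the shape of the
initialisation path, with names and fields assumed from the discussion
rather than taken from the tree: the region array (and the matching MMIO
handlers) must be sized before d->max_vcpus is available, so the maximum
host region count is the only safe bound.

    /* Sketch only; names and fields are assumptions based on the thread. */
    static int vgic_v3_domain_init(struct domain *d)
    {
        /*
         * d->max_vcpus is not yet known here, so size the region array
         * (and the matching MMIO handlers) for the worst case: as many
         * regions as the host exposes.
         */
        unsigned int count = vgic_v3_rdist_count(d);

        d->arch.vgic.rdist_regions =
            xzalloc_array(struct vgic_rdist_region, count);
        if ( !d->arch.vgic.rdist_regions )
            return -ENOMEM;

        /* The regions themselves are populated later, first one onwards. */
        return 0;
    }
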
> > 
> > Yeah, this is nasty, but it is clear that until we rework the code to
> > set d->max_vcpus earlier it won't get fixed. Nothing we can do here.
> > 
> > So, I think ideally we would want to fix vgic_v3_rdist_count, but today
> > we can't. Maybe we could rename vgic_v3_rdist_count to
> > vgic_v3_hw_rdist_count, and add a short TODO comment somewhere in the
> > file?
> > 
> 
> Which would be wrong, as the function doesn't always return the number of
> rdists for the HW.
> 
> A better name would be vgic_v3_max_rdist_count(struct domain *d).

I am OK with that
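
For reference, the renamed helper would keep today's behaviour, returning an
upper bound rather than a per-domain count. A sketch, assuming Xen's existing
vgic_v3_hw bookkeeping and a fixed region count for non-hardware domains:

    /*
     * Sketch of the rename: same semantics as today's
     * vgic_v3_rdist_count(), but the name makes clear this is a
     * maximum, not the exact per-domain count.
     */
    static unsigned int vgic_v3_max_rdist_count(struct domain *d)
    {
        /*
         * The hardware domain inherits the host layout, so it can need
         * up to as many regions as the host has; other guests use a
         * fixed number of regions.
         */
        return is_hardware_domain(d) ? vgic_v3_hw.nr_rdist_regions
                                     : GUEST_GICV3_RDIST_REGIONS;
    }
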
