Re: [Xen-devel] [RFC 06/19] xen/arm: Implement hypercall PHYSDEVOP_map_pirq
On 06/18/2014 08:24 PM, Stefano Stabellini wrote:
>> /*
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index e451324..c18b2ca 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -82,10 +82,7 @@ int domain_vgic_init(struct domain *d)
>>      /* Currently nr_lines in vgic and gic doesn't have the same meanings
>>       * Here nr_lines = number of SPIs
>>       */
>> -    if ( is_hardware_domain(d) )
>> -        d->arch.vgic.nr_lines = gic_number_lines() - 32;
>> -    else
>> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
>> +    d->arch.vgic.nr_lines = gic_number_lines() - 32;
>>
>>      d->arch.vgic.shared_irqs =
>>          xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
>
> I see what you mean about virq != pirq.
>
> It seems to me that setting d->arch.vgic.nr_lines = gic_number_lines() -
> 32 for the hardware domain is OK, but it is really a waste for the
> others. We could find a way to pass down the info about how many SPIs we
> need from libxl. Or we could delay the vgic allocations until the first
> SPI is assigned to the domU.

I checked on both Midway and the Versatile Express; there are about 200
lines, which makes an overhead of less than 8K per domain. That is not
too bad. If the host really supported 1024 IRQs, the overhead would be
~32K.

> Similarly to the MMIO hole sizing, I don't think that it would be a
> requirement for this patch series but it is something to keep in mind.

Handling virq != pirq will be more complex, as we need to take the
hotplug solution into account. The vGIC has a register which provides
the number of lines; I suspect this number can't grow while the guest
is running.

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel