
Re: [Xen-devel] Routing physical interrupts to EL1


On 07/07/2018 08:32 PM, Saeed Mirzamohammadi wrote:
Thanks for your detailed reply.

On Fri, Jul 6, 2018 at 6:13 AM, Julien Grall <julien.grall@xxxxxxx <mailto:julien.grall@xxxxxxx>> wrote:

    On 06/07/18 04:51, Saeed Mirzamohammadi wrote:



        I'm trying to route all the physical interrupts to the guest
        domain rather than having them trapped in Xen. I would like to
        know what is the right way to do that?

    May I ask what is your use case for that? If you route interrupts to
    the guest, Xen will not receive vital interrupts such as the timer,
    UART, and SMMU interrupts, the maintenance interrupt...

I only have one guest domain. So, I'm trying to make Xen transparent to avoid any extra overhead caused by trapping interrupts.

Do you include Dom0 in your "one guest domain"? If so, may I ask what is your end goal? Why not boot the OS on baremetal?

But I still need Xen for my own hypercalls. I don't need the timer because I pinned the vCPUs, and I don't need any vCPU scheduler.

Well, Xen still needs interrupts for other things like the UART and SMMU. It also needs interrupts to IPI other pCPUs, such as for softirqs or for unblocking another vCPU (waiting on an event, for instance)... I don't think you can discard interrupts that easily in Xen without some cooperation with the guest.

Let's imagine Xen IPIs another pCPU. If interrupts are routed to your guest, the guest will receive the IPI and will not know what to do with it.

Based on my understanding, I can only disable interrupt trapping on Arm altogether using the HCR_EL2 register, and we can't pick a single interrupt to not trap, right?

It depends on your interrupt controller. On GICv4, you will be able to directly inject some LPIs (i.e. MSIs).

        I know that HCR_IMO bit in the HCR_EL2 register is supposed to
        be for routing the interrupts to the guest (Routing to EL1
        instead of EL2).
        link to the datasheet:

        So, I have tried doing the following in
        the leave_hypervisor_tail. I run a simple hypercall and do the
        following lines before return (which is I guess the last point
        of exit to the guest from hypervisor):
        current->arch.hcr_el2 &= ~HCR_IMO;
        WRITE_SYSREG(current->arch.hcr_el2, HCR_EL2);
        It looks like it is doing the right thing for all
        the vCPUs, but it gets stuck after the return from
        leave_hypervisor_tail for the last vCPU.

    What do you mean by stuck? Do you see any logs?

It's hung with no log.

    HCR_EL2.IMO unset means interrupts will be signaled to EL1. It
    does not affect how interrupts are read (e.g. via IAR).

    Which interrupt controller are you using?

I'm using a GICv2.

    In case of GICv2, Xen is re-mapping GICC to GICV. So when the guest
    is reading IAR, it will read the interrupts from the LRs. Not the
    physical interface.

 So, in the case of GICv2, we can't route them because Xen is the one updating the LRs and the guest is reading from the LRs, am I right?

If you want to route *all* the interrupts, you can map GICC rather than GICV into your guest. Then, when the guest reads IAR, it will read the physical interrupts.

    In case of GICv3, HCR_EL2.IMO will also control the access. So you
    should be fine here.

    However, in both cases you will at least need to rework the way
    software-generated interrupts are sent to the guest. At the moment,
    they are written to the LRs.

Is it possible to not trap on the ICDSGIR (SGI register)?

The SGI registers are already trapped by Xen. They are emulated by writing the corresponding interrupt to the LRs.

However, SGIs are not the only interrupts generated by the hypervisor directly. There is also the event channel (a PPI) and any device emulated by the hypervisor (e.g. the PL011).

Trying to remove interrupts from the hypervisor is more pain than the benefit you will gain. You would be better off improving the latency of interrupt delivery (AFAIK it is already quite good).


Julien Grall
