
Re: [Xen-devel] Multiple issues with event channel on Xen on ARM



On 05/02/14 10:45, David Vrabel wrote:
> On 04/02/14 23:18, Julien Grall wrote:
>> Hello David,
>>
>> I'm currently trying to use Linux 3.14-rc1 as Linux guest on Xen on ARM (Xen 
>> 4.4-rc3).
>>
>> I have multiple issues with your event channel patch series on both the
>> Linux and Xen sides.
>> I tried to use Linux 3.14-rc1 as dom0 but it was worse (unable to create
>> guests).
> 
> I think there must be two issues here as both 2-level and FIFO events
> are broken.
> 
>> I'm using a simple guest config:
>> kernel="/root/zImage"
>> memory=32
>> name="test"
>> vcpus=1
>> autoballoon="off"
>> extra="console=hvc0"
>>
>> If everything is OK, I should see Linux fail to find the root
>> filesystem.
>> But here, Linux is stuck.
>>
>> From the Linux side, after bisecting, I found that the offending commit is:
>>     xen/events: remove unnecessary init_evtchn_cpu_bindings()
>>     
>>     Because the guest-side binding of an event to a VCPU (i.e., setting
>>     the local per-cpu masks) is always explicitly done after an event
>>     channel is bound to a port, there is no need to initialize all
>>     possible events as bound to VCPU 0 at start of day or after a resume.
>>     
>>     Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
>>     Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
>>     Reviewed-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
>>
>> With this patch, the function __xen_evtchn_do_upcall won't be able
>> to find any events (the pending bits are 0 every time).
>> It seems the second part of init_evtchn_cpu_bindings is necessary on ARM.
> 
> I think this is because binding an interdomain or allocating an unbound
> event channel does not call bind_evtchn_to_cpu(evtchn, 0), which is
> required to set the local VCPU masks.
> 
> I think this happened to work on x86 because during the generic irq
> setup, the irq affinity is always set which then binds the event channel
> to the right VCPU.  I guess ARM's irq setup misses this step.
> 
> This shouldn't affect the FIFO-based events though since
> evtchn_fifo_bind_to_cpu() is a no-op.

I think the following patch should fix the 2-level problems.

You can force the use of 2-level events by using the xen.fifo_events=0
Linux command line option.

8<-------------------------------------------------
xen/events: bind all new interdomain events to VCPU0

From: David Vrabel <david.vrabel@xxxxxxxxxx>

Commit fc087e10734a4d3e40693fc099461ec1270b3fff (xen/events: remove
unnecessary init_evtchn_cpu_bindings()) causes a regression.

The kernel-side VCPU binding was not being correctly set for newly
allocated or bound interdomain events.  In ARM guests where 2-level
events were used, this would result in no interdomain events being
handled because the local VCPU masks would all be clear.

x86 guests would work because the irq affinity was set during irq
setup and this would set the correct kernel-side VCPU binding.

Fix this by properly initializing the kernel-side VCPU binding in
bind_evtchn_to_irq().

Reported-by: Julien Grall <julien.grall@xxxxxxxxxx>
Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
---
 drivers/xen/events/events_base.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 4672e00..5cc1f78 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -862,6 +862,9 @@ int bind_evtchn_to_irq(unsigned int evtchn)
                        irq = ret;
                        goto out;
                }
+
+               /* Newly bound event channels start off on VCPU0. */
+               bind_evtchn_to_cpu(evtchn, 0);
        } else {
                struct irq_info *info = info_for_irq(irq);
                WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
-- 
1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
