
RE: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)



Keir Fraser wrote:
> On 22/10/2009 09:41, "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx> wrote:
> 
>>> Hmm, then I don't understand which case your patch was a fix for: I
>>> understood that it addresses an issue when the affinity of an
>>> interrupt gets changed (requiring a re-write of the address/data
>>> pair). If the hypervisor can deal with it without masking, then why
>>> did you add it?
>> 
>> Hmm, sorry, it seems I misunderstood your question. If the MSI doesn't
>> support the mask bit (clearing the MSI enable bit doesn't help in this
>> case), the issue may still exist. I just checked the Linux side, and it
>> doesn't seem to perform a mask operation when programming MSI, but I
>> don't know why Linux doesn't have such issues. Actually, we do see an
>> inconsistent interrupt message from the device without this patch, and
>> after applying the patch the issue is gone. It may need further
>> investigation why Linux doesn't need the mask operation.
> 
> Linux is quite careful about when it will reprogram vector/affinity
> info, isn't it? Doesn't it mark such an update pending and only flush
> it through during the next interrupt delivery, or something like that? Do
> we need some of the upstream Linux patches for this?
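
For context, the approach discussed above is to mask the MSI while the
address/data pair is rewritten. A minimal sketch of that idea, assuming an
illustrative register layout and helper name rather than the actual Xen code:

#include <stdint.h>

struct msi_desc {
    volatile uint32_t *mask_reg;   /* per-vector mask bit, if implemented */
    volatile uint32_t *addr_reg;   /* MSI address register */
    volatile uint32_t *data_reg;   /* MSI data register */
    int maskable;                  /* does the device implement the mask bit? */
};

/* Rewrite the address/data pair for a new vector/destination. */
static void reprogram_msi(struct msi_desc *desc,
                          uint32_t new_addr, uint32_t new_data)
{
    if (desc->maskable)
        *desc->mask_reg |= 1;      /* mask so the device cannot raise an
                                      interrupt while the message is
                                      half-updated */

    *desc->addr_reg = new_addr;    /* new destination (affinity) */
    *desc->data_reg = new_data;    /* new vector */

    if (desc->maskable)
        *desc->mask_reg &= ~1u;    /* unmask: message is consistent again */

    /* Without a mask bit the device can still latch an inconsistent
       address/data pair during the two writes above -- the case this
       thread is worried about. */
}
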
Yeah, after checking the related logic in Linux, I think we need to port more
of the IRQ-migration logic to avoid the races reported in this thread.
To set the affinity for a specific IRQ, the first step is to mark the change
pending, and then do the real programming just before acking the IRQ on the
next interrupt delivery; at that point a normal device should not generate new
interrupts until the pending one has been acked. I will post the backport
patch later.
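
A rough sketch of that deferred-update scheme, loosely modelled on Linux's
pending IRQ-migration logic; the names and types below are illustrative
assumptions, not the actual backport:

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t cpumask_t;                /* simplified CPU mask */

struct irq_desc {
    cpumask_t affinity;                    /* destination currently programmed */
    cpumask_t pending_affinity;            /* requested but not yet applied */
    bool      move_pending;
};

/* Step 1: a request to move the IRQ only records the new mask. */
static void set_affinity(struct irq_desc *desc, cpumask_t mask)
{
    desc->pending_affinity = mask;
    desc->move_pending = true;
}

/* Would rewrite the MSI/IO-APIC destination for the new mask. */
static void program_destination(struct irq_desc *desc, cpumask_t mask)
{
    desc->affinity = mask;                 /* placeholder for the real rewrite */
}

/* Step 2: apply the pending move on the ack path of the next interrupt.
 * A well-behaved device will not raise another interrupt until this one
 * has been acked, so the rewrite cannot race with a new message. */
static void ack_irq(struct irq_desc *desc)
{
    if (desc->move_pending) {
        program_destination(desc, desc->pending_affinity);
        desc->move_pending = false;
    }
    /* ...then EOI/ack the interrupt controller as usual. */
}
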
Xiantao
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

