
Re: [Xen-devel] Race in vlapic init_sipi tasklet


  • To: George Dunlap <dunlapg@xxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Mon, 18 Oct 2010 18:26:19 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Tim Deegan <Tim.Deegan@xxxxxxxxxx>
  • Delivery-date: Mon, 18 Oct 2010 10:28:05 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Actu6ZNsnQ8NMqCqZUGL14pI5Li+IQ==
  • Thread-topic: [Xen-devel] Race in vlapic init_sipi tasklet

Good race! I'll work out a fix and let you know.

 K.

On 18/10/2010 18:16, "George Dunlap" <dunlapg@xxxxxxxxx> wrote:

> I've been tracking down a bug where a multi-vcpu VM hangs in the
> hvmloader on credit2, but not on credit1.  It hangs while trying to
> bring up extra cpus.
> 
> It turns out that an unintended quirk in credit2 (some might call it a
> bug) causes a scheduling order which exposes a race in the vlapic
> init_sipi tasklet handling code.
> 
> The code as it stands right now is meant to do this:
> * v0 does an APIC ICR write with APIC_DM_STARTUP, trapping to Xen.
> * vlapic code checks to see that v1 is down (vlapic.c:318); finds that
> it is down, and schedules the tasklet, returning X86_EMUL_RETRY
> (vlapic.c:270)
> * Tasklet runs, brings up v1.
> * v1 starts running.
> * v0 re-executes the instruction, finds that v1 is up, and returns
> X86_EMUL_OK, allowing the instruction to move forward.
> * v1 does some diagnostics, and takes itself offline.
> 
> Unfortunately, the credit2 scheduler almost always preempts v0
> immediately, allowing v1 to run to completion and bring itself back
> offline again, before v0 can re-try the MMIO.  It looks like this:
> * v0 does APIC ICR APIC_DM_STARTUP write, trapping to Xen.
> * vlapic code checks to see that v1 is down; finds that it is down,
> schedules the tasklet, returns X86_EMUL_RETRY
> * Tasklet runs, brings up v1
> * Credit 2 pre-empts v0, allowing v1 to run
> * v1 starts running
> * v1 does some diagnostics, and takes itself offline.
> * v0 re-executes the instruction, finds that v1 is down, and again
> schedules the tasklet and returns X86_EMUL_RETRY.
> * For some reason the tasklet doesn't actually bring up v1 again
> (presumably because it hasn't had an APIC_DM_INIT again); so v0 is
> stuck doing X86_EMUL_RETRY forever.
> 
> The problem is that VPF_down is used as the test to see if the tasklet
> has finished its work; but there's no guarantee that the scheduler
> will run v0 before v1 has come up and gone back down again.
> 
> I discussed this with Tim, and we agreed that we should ask you.
> 
> One option would be to simply make vlapic_schedule_sipi_init_ipi()
> always return X86_EMUL_OK, but we weren't sure if that might cause
> some other problems.
> 
> The "right" solution, if synchronization is needed, is to have an
> explicit signal sent back that the instruction can be allowed to
> complete, rather than relying on reading VPF_down, which may cause
> races.
> 
> Thoughts?
> 
>  -George
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel





 

