
Re: [Xen-devel] long latency of domain shutdown


  • To: Jan Beulich <jbeulich@xxxxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Thu, 08 May 2008 15:38:14 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 08 May 2008 07:38:51 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcixGSVpY8zewB0MEd28OwAX8io7RQ==
  • Thread-topic: [Xen-devel] long latency of domain shutdown

On 8/5/08 15:29, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

> Hmm, storing this in page_info seems questionable to me. It'd be at
> least 18 bits (on x86-64) that we'd need. I think this rather has to go
> into struct vcpu.

We can, for example, reuse tlbflush_timestamp for this purpose. If we stick it
in the vcpu structure, I think we make life hard for ourselves. What if the
guest does not resume the hypercall, for example? What if the guest instead
goes and executes a different hypercall?

> But what worries me more is that (obviously) any affected page will
> have to have its PGT_validated bit kept clear, which could lead to
> undesirable latencies in spin loops on other vcpus waiting for it to
> become set. In the worst case this could lead to deadlocks (at least
> in the UP case, or when multiple vCPUs of one guest are pinned to
> the same physical CPU) afaics. Perhaps this part could indeed be
> addressed with a new PGT_* bit, upon which waiters could exit
> their spin loops and consider themselves preempted.

Yes, the page state machine does need some more careful thought. I'm pretty
sure we have enough page state bits though.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
