
Re: [Xen-devel] [PATCH] HVM ACPI guest OS support in piix4 ACPI event logical model - part 2 of 4

  • To: "Wang, Winston L" <winston.l.wang@xxxxxxxxx>, "Tang Liang" <tangliang@xxxxxxxxxx>
  • From: "Christian Limpach" <christian.limpach@xxxxxxxxx>
  • Date: Thu, 29 Jun 2006 13:33:37 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 29 Jun 2006 05:34:01 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>


On 6/17/06, Wang, Winston L <winston.l.wang@xxxxxxxxx> wrote:
Attached please see the HVM guest OS ACPI patch, part 2 of 4. The
ACPI timer is required during guest Windows installation and boot.

The qemu timer used to implement the ACPI timer doesn't seem to work
quite right: the time it is set to expire immediately falls out of
sync with qemu's vm_clock, which then causes the timer to fire almost
continuously, resulting in qemu-dm using between 20 and 30% of the
CPU on my machine.

How about the following change:
--- tools/ioemu/hw/piix4acpi.c  2006-06-27 11:12:20.000000000 +0100
+++ tools/ioemu.hg/hw/piix4acpi.c       2006-06-29 09:54:56.513574005 +0100
@@ -111,7 +110,8 @@
 static void pm_timer_update(void *opaque)
 {
     PMTState *s = opaque;
-    s->next_pm_time += muldiv64(1, ticks_per_sec, FREQUENCE_PMTIMER);
+    s->next_pm_time = qemu_get_clock(vm_clock) +
+        muldiv64(1, ticks_per_sec, FREQUENCE_PMTIMER);
     qemu_mod_timer(s->pm_timer, s->next_pm_time);
     acpi_state->pm1_timer++;

I'm not convinced that vm_clock actually works at all for us.


Xen-devel mailing list


