Re: [Xen-devel] backport requests for 4.x-testing
> >> > Applied 23225 and 24013. The other, toolstack-related, patches I will
> >> > leave
> >> > for a tools maintainer to ack or apply.
> >>
> > Hey Teck,
> >
> > Thanks for reporting!
> >
> >> With the two backport patches committed in xen-4.1-testing (changeset
> >> 23271:13741fd6253b), xl list or xl create domU will cause 100% CPU and
> >
> > xl list?
>
> After a reboot with no domU running, xl list is fine, but if I start an
> HVM domU it gets stuck and causes high load; running xl list from
> another ssh terminal then gets stuck as well.
This fixes it for me:
diff -r 13741fd6253b xen/arch/x86/domain.c
--- a/xen/arch/x86/domain.c     Thu Mar 29 10:20:58 2012 +0100
+++ b/xen/arch/x86/domain.c     Thu Mar 29 11:44:54 2012 -0400
@@ -558,9 +558,9 @@ int arch_domain_create(struct domain *d,
         d->arch.is_32bit_pv = d->arch.has_32bit_shinfo =
             (CONFIG_PAGING_LEVELS != 4);
-        spin_lock_init(&d->arch.e820_lock);
     }
+    spin_lock_init(&d->arch.e820_lock);
     memset(d->arch.cpuids, 0, sizeof(d->arch.cpuids));
     for ( i = 0; i < MAX_CPUID_INPUT; i++ )
     {
@@ -605,8 +605,8 @@ void arch_domain_destroy(struct domain *
     if ( is_hvm_domain(d) )
         hvm_domain_destroy(d);
-    else
-        xfree(d->arch.e820);
+
+    xfree(d->arch.e820);
     vmce_destroy_msr(d);
     free_domain_pirqs(d);
The issue is that upstream we have two 'domain structs' - one for PV and
one for HVM. In 4.1 it is just 'arch_domain', and the calls to create
the guests go through the same interface (at least using xl; with
xm they are separate). I only initialized the spinlock in the PV case,
not in the HVM case. This fix to the backport resolves the problem.
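
To spell out the shape of the fix: below is a minimal sketch of the
post-fix control flow in arch_domain_create(), not the literal 4.1
source (the surrounding setup is elided and simplified). The point is
that the backported e820 handling can take d->arch.e820_lock for HVM
guests too, so the lock must be initialized for every domain type;
before the fix an HVM guest spun on an uninitialized lock, which is
what showed up as the 100% CPU hang.

/* Simplified sketch, assuming the 4.1 arch_domain layout described
 * above; not the actual Xen source. */
int arch_domain_create(struct domain *d, unsigned int domcr_flags)
{
    if ( is_hvm_domain(d) )
    {
        /* HVM-specific setup ... (no lock init lived here before the fix) */
    }
    else
    {
        /* PV-specific setup ... */
    }

    /* Initialize the e820 lock unconditionally, so any later path that
     * takes d->arch.e820_lock (PV or HVM) finds a properly set-up lock. */
    spin_lock_init(&d->arch.e820_lock);

    return 0;
}

The arch_domain_destroy() hunk follows the same logic: xfree() of the
e820 map becomes unconditional, since an HVM domain may now have one
allocated as well (xfree of a NULL pointer is harmless).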
Keir, please apply this to my botched back-port of 23225.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel