
Re: [Xen-devel] Xen 4.6.0-rc1 build with lock_profile=y crash_debug=y, frame_pointer=y and domain.c:241: error: negative width in bit-field ‘<anonymous>’



On Wed, Aug 26, 2015 at 01:32:54AM -0600, Jan Beulich wrote:
> >>> On 25.08.15 at 21:54, <konrad.wilk@xxxxxxxxxx> wrote:
> > On Tue, Aug 25, 2015 at 03:52:15PM -0400, Konrad Rzeszutek Wilk wrote:
> >> On Tue, Aug 25, 2015 at 06:41:06PM +0100, Andrew Cooper wrote:
> >> > On 25/08/15 18:09, Konrad Rzeszutek Wilk wrote:
> >> > > --- a/xen/arch/x86/domain.c
> >> > > +++ b/xen/arch/x86/domain.c
> >> > > @@ -238,7 +238,9 @@ struct domain *alloc_domain_struct(void)
> >> > >      if ( unlikely(!bits) )
> >> > >           bits = _domain_struct_bits();
> >> > >  
> >> > > +#ifndef LOCK_PROFILE
> >> > >      BUILD_BUG_ON(sizeof(*d) > PAGE_SIZE);
> >> > > +#endif
> >> > >      d = alloc_xenheap_pages(0, MEMF_bits(bits));
> >> > >      if ( d != NULL )
> >> > >          clear_page(d);
> >> > >
> >> > > (not compile tested nor runtime tested)
> >> > 
> >> > Either remove it locally for debugging, or use something like
> > 
> > I forgot to mention that by removing it locally Xen ends up halting at:
> > 
> > (XEN) Detected 3292.657 MHz processor.
> 
> Sure - as Andrew said on irc, neither simply removing it nor replacing
> it by vmalloc() can actually work. At the very least you'd want the
> above to become
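
Context for the error in the subject line: BUILD_BUG_ON() is implemented
via a negative-width bit-field, so the "negative width in bit-field
'<anonymous>'" error is just that check firing once sizeof(*d) outgrows
PAGE_SIZE. Deleting the check merely trades the build error for silent
corruption. A sketch of the failure mode (identifiers as in the code
quoted above; the comments are my reading of it):

    /* Order 0 means exactly one PAGE_SIZE page. */
    struct domain *d = alloc_xenheap_pages(0, MEMF_bits(bits));

    if ( d != NULL )
        clear_page(d);   /* zeroes only the first PAGE_SIZE bytes */

    /*
     * With lock_profile=y, sizeof(*d) > PAGE_SIZE, so any field past
     * the first page lives in memory that was never allocated to us,
     * which would explain a hang early in boot rather than a clean
     * error.
     */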

I ended up doing:

From 52548151b6db3724fdafd005f52fc1aaefa65eff Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Tue, 25 Aug 2015 21:25:54 -0400
Subject: [PATCH] x86: Allow 'struct domain' to expand past PAGE_SIZE

Building with lock profiling enabled expands the spinlock structures
considerably, which pushes 'struct domain' past PAGE_SIZE.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
---
 xen/arch/x86/domain.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 045f6ff..7df58d8 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -237,9 +237,10 @@ struct domain *alloc_domain_struct(void)
 
     if ( unlikely(!bits) )
          bits = _domain_struct_bits();
-
+#ifndef LOCK_PROFILE
     BUILD_BUG_ON(sizeof(*d) > PAGE_SIZE);
-    d = alloc_xenheap_pages(0, MEMF_bits(bits));
+#endif
+    d = alloc_xenheap_pages(get_order_from_bytes(sizeof(*d)), MEMF_bits(bits));
     if ( d != NULL )
         clear_page(d);
     return d;
@@ -261,7 +262,7 @@ struct vcpu *alloc_vcpu_struct(void)
      * structure must satisfy this restriction. Thus we specify MEMF_bits(32).
      */
     BUILD_BUG_ON(sizeof(*v) > PAGE_SIZE);
-    v = alloc_xenheap_pages(0, MEMF_bits(32));
+    v = alloc_xenheap_pages(get_order_from_bytes(sizeof(*v)), MEMF_bits(32));
     if ( v != NULL )
         clear_page(v);
     return v;
-- 
2.1.0
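
For reference on the "expands considerably" claim in the commit message:
with lock_profile=y every spinlock_t carries an extra pointer to a
per-lock profiling record, and the domain gains a queue head for those
records. Roughly the shape, going from memory of
xen/include/xen/spinlock.h of that era, so take the field list as
approximate:

    struct lock_profile {
        struct lock_profile *next;       /* next record in the queue  */
        char                *name;       /* lock name for xenlockprof */
        struct spinlock     *lock;       /* back pointer to the lock  */
        u64                 lock_cnt;    /* completed lock operations */
        u64                 block_cnt;   /* times we had to block     */
        s64                 time_hold;   /* cumulative hold time      */
        s64                 time_block;  /* cumulative block time     */
    };

'struct domain' embeds a fair number of spinlocks, and together with
crash_debug=y that is evidently enough to push sizeof(*d) past
PAGE_SIZE.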


That patch made it compile and boot. However, the moment I ran
xenlockprof I got this nice splash:

(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d08012e0fa>] spinlock_profile_iterate+0x5b/0x96
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v0)
(XEN) rax: ffff83009e6f7e40   rbx: c2c2c2c2c2c2c2c2   rcx: ffff83009e6f7d48
(XEN) rdx: 000000000000003d   rsi: 0000000000000001   rdi: c2c2c2c2c2c2c2c2
(XEN) rbp: ffff83009e6f7d38   rsp: ffff83009e6f7cf8   r8:  00000000deadbeef
(XEN) r9:  00000000deadbeef   r10: ffff82d0802542a0   r11: 0000000000000286
(XEN) r12: ffff830413381078   r13: 0000000000000001   r14: ffff82d08012da37
(XEN) r15: ffff83009e6f7d48   cr0: 0000000080050033   cr4: 00000000000426e0
(XEN) cr3: 0000000411303000   cr2: 00007f42d4a6e9a0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83009e6f7cf8:
(XEN)    0000000000411fc1 ffff82d080327ef0 ffff830400000000 ffff83009e6f7e40
(XEN)    00007f42d4ea4004 ffff83009e6f7f18 ffff83009e6f7f18 ffff82d08031be00
(XEN)    ffff83009e6f7d68 ffff82d08012e1e5 ffff83009e6f7e40 ffff82d000000000
(XEN)    0000000000000000 ffffffffffffffff ffff83009e6f7ef8 ffff82d08012f3fa
(XEN)    ffff83009e6f7d98 ffff83042c186000 ffff83009e8b0000 ffff83009e6f7d98
(XEN)    ffff83009e6f7f18 ffff83009e6f7f18 ffff83009e6f7f18 ffff83009e6f7f18
(XEN)    ffff83009e6f7f18 ffff83009e6f7f18 ffff83009e6f7f18 0000000000000000
(XEN)    ffff83009e6f7eb0 ffff83009e6f7ea8 ffff88008020ab10 0000000000411fc1
(XEN)    0000000000000000 ffff830000000007 0000000300007ff0 ffff820040008000
(XEN)    ffff83009e8b0000 8000000402e3d067 0000000400000002 ffff83009e6f7f18
(XEN)    0000000c0000000f 0000000000000002 00007ffd0000003e 0000000000000000
(XEN)    0000000000000000 00007f42d4a5a8e8 00000000010eb050 0000000000000000
(XEN)    00007ffd86253bd0 00000000004008f0 00007ffd86253cb0 00007f42d4ca07c5
(XEN)    00000000010eb050 00007f42d47ae695 00007ffd86253b68 00007ffd86253b68
(XEN)    00000000010eb050 0000000000000033 ffff83009e6f7ed8 ffff83009e8b0000
(XEN)    ffff880068795900 ffff88007395dd88 0000000000000003 00007ffd86253900
(XEN)    00007cff619080c7 ffff82d0802396d2 ffffffff8100146a 0000000000000023
(XEN)    0000000000000000 0000000000000000 00007ffd86253cb0 00000000004008f0
(XEN)    ffff8800686d3e68 00007ffd86253900 0000000000000286 00007ffd862539b0
(XEN)    00007ffd86253b68 00007f42d4c95256 0000000000000023 ffffffff8100146a
(XEN) Xen call trace:
(XEN)    [<ffff82d08012e0fa>] spinlock_profile_iterate+0x5b/0x96
(XEN)    [<ffff82d08012e1e5>] spinlock_profile_control+0x52/0x6c
(XEN)    [<ffff82d08012f3fa>] do_sysctl+0x33a/0x10c0
(XEN)    [<ffff82d0802396d2>] lstar_enter+0xe2/0x13c
(XEN) 
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) GENERAL PROTECTION FAULT
(XEN) [error_code=0000]
(XEN) ****************************************
(XEN) 
(XEN) Reboot in five seconds...
(XEN) Resetting with ACPI MEMORY or I/O RESET_REG.

Off to figure this one out next week (the xen-syms binary is at
http://darnok.org/xen-syms in case anyone else wants to jump on this).
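
One observation before I go: rbx and rdi hold c2c2c2c2c2c2c2c2, which
looks like the xenheap scrub pattern that debug builds write into freed
pages. My patch grows the allocation past one page but still only calls
clear_page(d), so everything past the first page (presumably including
the per-domain lock profiling queue that spinlock_profile_iterate walks)
keeps its scrubbed contents. A sketch of what alloc_domain_struct()
probably needs instead; the diagnosis is my guess and not yet verified:

    unsigned int order = get_order_from_bytes(sizeof(*d));

    d = alloc_xenheap_pages(order, MEMF_bits(bits));
    if ( d != NULL )
        memset(d, 0, PAGE_SIZE << order); /* clear every page, not just the first */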
