Re: [Xen-devel] soft lockups during live migrate..
Hi,

At 03:06 +0000 on 06 Nov (1257476764), Mukesh Rathor wrote:
> Ok, I'm confused. It appears oos disable is relevant for hvm only... I'm
> running PV.
>
> sh_update_paging_modes()
>     ....
>     if ( is_hvm_domain(d) && !d->arch.paging.shadow.oos_off )   <---
>     ...

Hmmm. It looks like we never unsync for PV guests. There's probably no
reason not to, but it probably wouldn't help all that much since we
already intercept all PV pagetable updates.

It's a bit surprising that sh_resync_all() is the function your CPU is
stopped in. Is that consistently the case, or was it just one example?
I suppose for 32 VCPUs it does a lot of locking and unlocking of the
shadow lock. You could try adding

    if ( !d->arch.paging.shadow.oos_active )
        return;

at the top of that function and see if it helps (a rough sketch of where
that check would sit follows at the end of this message).

> Also,
> >> Actually, things are fine with 32GB/32vcpus. Problem happens with
> >> 64GB/32vcpus. Trying the unstable version now.
>
> > Interesting. Have you tried increasing the amount of shadow memory
> > you give to the guest? IIRC xend tries to pick a sensible default, but
> > if it's too low and you start thrashing, things can get very slow indeed.
>
> What do you recommend I start with for 32VCPUs and 64GB?

It really depends on the workload. I think the default for a 64GiB
domain will be about 128MiB, so maybe try 256MiB and 512MiB and see if
it makes a difference. This bit of python will let you change it on the
fly; run it with a domid and a shadow allocation in MiB.

#!/usr/bin/env python
import sys
import xen.lowlevel.xc
xc = xen.lowlevel.xc.xc()
print "%i" % xc.shadow_mem_control(dom=int(sys.argv[1]), mb=int(sys.argv[2]))

Tim.

--
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, Citrix Systems (R&D) Ltd.
[Company #02300071, SL9 0DZ, UK.]
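As a usage example for the script above (the filename here is made up): saved
as shadow-mem.py and made executable, ./shadow-mem.py 12 512 would ask Xen to
give domain 12 a 512MiB shadow allocation and print the return value of
shadow_mem_control() for that call.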
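Below is a minimal sketch of where the suggested early return could sit. The
prototype shown (a vcpu-first argument list and a local d = v->domain) is an
assumption about the shape of sh_resync_all() in
xen/arch/x86/mm/shadow/common.c, so check the actual function in your tree;
only the two-line oos_active check is the change suggested above.

/* Hypothetical sketch: exact prototype and body structure may differ. */
static void sh_resync_all(struct vcpu *v, int skip, int this, int others)
{
    struct domain *d = v->domain;

    /* Suggested bail-out: if out-of-sync shadows were never enabled for
     * this domain (as for PV guests), skip the resync pass and all of
     * the shadow-lock acquire/release traffic it would generate. */
    if ( !d->arch.paging.shadow.oos_active )
        return;

    /* ... existing code: walk the domain's vcpus and resync any
     * out-of-sync shadow pages ... */
}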