
Re: [Xen-users] XL ballooning issue with 96gb VM



On Mon, 2014-03-31 at 12:59 -0700, Saurabh Mishra wrote:


> We are using Xen 4.2.4


> lc-1:~ # xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  8151     4     r-----   1681.3
> pvm-01-1                                     1  8187     4     -b----    256.4
> pool1-slot1                                  2 98299    32     -b----     81.6

Here you have dom0 and one guest at roughly 8GB each, plus one ~96GB
guest, for a total of about 112GB. On your 132GB host that leaves only
about 20GB spare, which is far short of the ~96GB another guest needs
and is probably why the "xl create" fails. (The error message could be
clearer; you may find an additional hint in "xl dmesg".)

If the above is anomalous and you think there should be more memory
free, then "xl info" (the free_memory field in particular) will show how
much the hypervisor actually believes it has available.
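
A quick way to sanity-check the accounting, if it helps (a sketch using
standard xl commands; the awk one-liner just totals the Mem column of
"xl list", which is in MB):

    # the hypervisor's own view of memory, in MB
    xl info | egrep 'total_memory|free_memory'

    # total MB currently assigned to dom0 + all guests
    xl list | awk 'NR>1 {sum += $3} END {print sum " MB assigned"}'

If free_memory comes out far short of the ~96GB (plus a small per-guest
overhead) that pool1-vm6 needs, then the "Failed allocation" in the log
below is exactly what you would expect to see.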


> 
> lc-1:~ # grep dom0 /boot/efi/efi/SuSE/xen.cfg
> 
> options=crashkernel=256M@16M console=com1 com1=115200 dom0_mem=8192m
> iommu=1,sharept extra_guest_irqs=80 dom0_max_vcpus=4 dom0_vcpus_pin
> no-bootscrub
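
(One thing worth checking, given the subject line: since dom0's memory
is fixed with dom0_mem=8192m above, xl should also be told not to
autoballoon dom0 when building guests. A minimal /etc/xen/xl.conf
setting for that, assuming it isn't set already:

    # dom0 memory is fixed via dom0_mem=8192m on the Xen command line,
    # so don't let xl balloon dom0 down to find memory for new guests
    autoballoon=0

This won't change the totals above, but it keeps xl from trying to
shrink dom0 to make room.)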
> 
> 
> -------------xl create start Sat Mar 29 13:59:49 UTC 2014--------------------
> 
> WARNING: ignoring "kernel" directive for HVM guest. Use "firmware_override" instead if you really want a non-default firmware
> WARNING: ignoring device_model directive.
> WARNING: Use "device_model_override" instead if you really want a non-default device_model
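
(Those two warnings are cosmetic: xl is just saying that it ignores
"kernel=" and "device_model=" directives for an HVM guest, so they can
simply be dropped from pool1-vm6.cfg. If a non-default firmware or
device model is genuinely wanted, the replacement options the warnings
name would look like the sketch below, with purely illustrative paths.)

    # only needed for a genuinely non-default firmware / device model
    firmware_override = "/path/to/custom/firmware"
    device_model_override = "/path/to/custom/qemu-dm"

Either way they are unrelated to the allocation failure further down.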
> 
> libxl: debug: libxl_create.c:1192:do_domain_create: ao 0x625390: create: how=(nil) callback=(nil) poller=0x624850
> libxl: debug: libxl_device.c:245:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:191:disk_try_backend: Disk vdev=hda, backend phy unsuitable as phys path not a block device
> libxl: debug: libxl_device.c:281:libxl__device_disk_set_backend: Disk vdev=hda, using backend tap
> libxl: debug: libxl_create.c:694:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x625920: deregister unregistered
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9cc04
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19cc04
> xc: info: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->000000000019cc04
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->00000017ff800000
>   ENTRY ADDRESS: 0000000000100000
> xc: detail: Failed allocation for dom 2: 2048 extents of order 0
> xc: error: Could not allocate memory for HVM guest. (16 = Device or resource busy): Internal error
> libxl: error: libxl_dom.c:656:libxl__build_hvm: hvm building failed
> libxl: error: libxl_create.c:919:domcreate_rebuild_done: cannot (re-)build domain: -3
> libxl: error: libxl_dm.c:1262:libxl__destroy_device_model: could not find device-model's pid for dom 2
> libxl: error: libxl.c:1419:libxl__destroy_domid: libxl__destroy_device_model failed for 2
> libxl: debug: libxl_event.c:1568:libxl__ao_complete: ao 0x625390: complete, rc=-3
> libxl: debug: libxl_create.c:1205:do_domain_create: ao 0x625390: inprogress: poller=0x624850, flags=ic
> libxl: debug: libxl_event.c:1540:libxl__ao__destroy: ao 0x625390: destroy
> xc: debug: hypercall buffer: total allocations:18274 total releases:18274
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:18263 misses:2 toobig:9
> 
> Parsing config from /root/vmmgr/hvmmgr/.hvmmgrd/vms/pool1-vm6.cfg
> 
> -------------xl create end--------------------



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

