[Xen-users] XL ballooning issue with 96gb VM
Hi,

We are seeing the error below from xl create. The host has 132 GB of RAM, and dom0 memory is restricted to 8 GB via the hypervisor boot parameter; we did not set it lower because rsync of 4 GB files takes longer when dom0 has less memory.
What settings should we use so that we don't get this error?

xc: detail: Failed allocation for dom 2: 2048 extents of order 0
xc: error: Could not allocate memory for HVM guest. (16 = Device or resource busy): Internal error
libxl: error: libxl_dom.c:656:libxl__build_hvm: hvm building failed
libxl: error: libxl_create.c:919:domcreate_rebuild_done: cannot (re-)build domain: -3
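For reference, this is how we plan to check how much memory the hypervisor still reports as unallocated right before running xl create (just a sketch; the grep pattern is only there to trim the output):

# show the hypervisor's view of total vs. unassigned memory (values in MB)
xl info | grep -E 'total_memory|free_memory'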
We are using Xen 4.2.4.

lc-1:~ # uname -a
Linux ssc-lc-1 3.0.101-0.15-xen #1 SMP Wed Jan 22 15:49:03 UTC 2014 (5c01f4e) x86_64 x86_64 x86_64 GNU/Linux

lc-1:~ # xentop
xentop - 13:54:04   Xen 4.2.4_02-0.7.1

According to http://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance, dom0 memory should be set to 4 GB. Is that the setting we should use here?
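If we have read that wiki page correctly, the settings it describes would look roughly like the two lines below; the 4096M value is only the wiki's example, not what we currently run:

# hypervisor command line (xen.cfg): pin dom0 to a fixed size so it never balloons
dom0_mem=4096M,max:4096M

# /etc/xen/xl.conf: stop xl from trying to balloon dom0 down when creating guests
autoballoon=0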
How can we resolve this problem?

Thanks,
/Saurabh

lc-1:~ # xl list
Name                            ID    Mem VCPUs      State   Time(s)
Domain-0                         0   8151     4     r-----    1681.3
pvm-01-1                         1   8187     4     -b----     256.4
pool1-slot1                      2  98299    32     -b----      81.6
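Doing rough arithmetic on the Mem column above (a back-of-the-envelope check, assuming those values are the full per-domain allocations in MB):

# memory already assigned to dom0 and the two running guests, in MB
echo $(( 8151 + 8187 + 98299 ))     # 114637
# memory left out of 132 GB, in MB
echo $(( 132 * 1024 - 114637 ))     # 20531, well short of the ~96 GB the new guest asks for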
lc-1:~ # grep dom0 /boot/efi/efi/SuSE/xen.cfg
options=crashkernel=256M@16M console=com1 com1=115200 dom0_mem=8192m iommu=1,sharept extra_guest_irqs=80 dom0_max_vcpus=4 dom0_vcpus_pin no-bootscrub
-------------xl create start Sat Mar 29 13:59:49 UTC 2014--------------------
WARNING: ignoring "kernel" directive for HVM guest. Use "firmware_override" instead if you really want a non-default firmware
WARNING: ignoring device_model directive.
WARNING: Use "device_model_override" instead if you really want a non-default device_model
libxl: debug: libxl_create.c:1192:do_domain_create: ao 0x625390: create: how=(nil) callback=(nil) poller=0x624850
libxl: debug: libxl_device.c:245:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:191:disk_try_backend: Disk vdev=hda, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:281:libxl__device_disk_set_backend: Disk vdev=hda, using backend tap
libxl: debug: libxl_create.c:694:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x625920: deregister unregistered
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9cc04
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19cc04
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000019cc04
  Modules:       0000000000000000->0000000000000000
  TOTAL:         0000000000000000->00000017ff800000
  ENTRY ADDRESS: 0000000000100000
xc: detail: Failed allocation for dom 2: 2048 extents of order 0
xc: error: Could not allocate memory for HVM guest. (16 = Device or resource busy): Internal error
libxl: error: libxl_dom.c:656:libxl__build_hvm: hvm building failed
libxl: error: libxl_create.c:919:domcreate_rebuild_done: cannot (re-)build domain: -3
libxl: error: libxl_dm.c:1262:libxl__destroy_device_model: could not find device-model's pid for dom 2
libxl: error: libxl.c:1419:libxl__destroy_domid: libxl__destroy_device_model failed for 2
libxl: debug: libxl_event.c:1568:libxl__ao_complete: ao 0x625390: complete, rc=-3
libxl: debug: libxl_create.c:1205:do_domain_create: ao 0x625390: inprogress: poller=0x624850, flags=ic
libxl: debug: libxl_event.c:1540:libxl__ao__destroy: ao 0x625390: destroy
xc: debug: hypercall buffer: total allocations:18274 total releases:18274
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:18263 misses:2 toobig:9
Parsing config from /root/vmmgr/hvmmgr/.hvmmgrd/vms/pool1-vm6.cfg
-------------xl create end--------------------