[Xen-users] cpupool-numa-split broken with pinned cpus?
Seeing this output; previously (around Xen 4.4) it used to work despite the pinned cpus.

# xl cpupool-numa-split
libxl: error: libxl.c:5516:libxl_set_vcpuonline: Requested 12 VCPUs, however maxcpus is 2!: Success
error on removing vcpus for Domain-0

And then nothing has changed:

# xl cpupool-list
Name               CPUs   Sched     Active   Domain count
Pool-node0           24    credit       y          7

# xl vcpu-list 0
Name                                ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
Domain-0                             0     0    0   r--      65.2  0 / all
Domain-0                             0     1    1   -b-      67.8  1 / all

Two issues. One is that the message ends in ": Success" when no such thing occurred. The other is that this may be wrong behaviour anyway, since the number of pinned cores (2) is lower than the number of cores on the first node. It may be that the test in libxl.c around line 5515 should allow this case?

ref https://fossies.org/dox/xen-4.6.1/libxl_8c_source.html

FWIW this machine has dual E5-2620 processors with 64G RAM. The first few lines from xl dmesg are:

# xl dmesg
 Xen 4.6.1
(XEN) Xen version 4.6.1 (mockbuild@xxxxxxxxx) (gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16)) debug=n Tue Apr 19 10:53:28 AEST 2016
(XEN) Latest ChangeSet:
(XEN) Bootloader: GNU GRUB 0.97
(XEN) Command line: dom0_mem=2048M cpufreq=xen dom0_max_vcpus=2 dom0_vcpus_pin

Happy to test or provide additional info as required; I have this machine as a testbed for a wee while. Or be otherwise corrected if this is in fact desired behaviour.

Thanks

Regards,
Glenn
http://ri.mu - Startups start here. Hosting. DNS. Web Programming. Email. Backups. Monitoring.
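PS: For anyone who wants to look at the code, here is a sketch of the guard I believe is firing, reconstructed from my reading of the fossies source above rather than copied verbatim, so please double-check it against the real tree:

  /* libxl.c, libxl_set_vcpuonline(), near line 5516 in xen 4.6.1
   * (sketch from reading the source, not a verbatim copy) */
  maxcpus = libxl_bitmap_count_set(cpumap);
  if (maxcpus > info.vcpu_max_id + 1) {
      /* LOGE() appends strerror(errno); nothing on this path sets
       * errno, so it is still 0 and the message gets ": Success". */
      LOGE(ERROR, "Requested %d VCPUs, however maxcpus is %d!",
           maxcpus, info.vcpu_max_id + 1);
      rc = ERROR_FAIL;
      goto out;
  }

And, purely to illustrate what "allow this case" might mean, here is a hypothetical relaxation that clamps the requested map to the vcpus the domain actually has instead of failing. libxl_bitmap_reset() and LOG() are existing libxl helpers, but the clamping idea itself is just my untested suggestion:

  if (maxcpus > info.vcpu_max_id + 1) {
      int i;

      LOG(WARN, "Requested %d VCPUs, clamping to maxcpus %d",
          maxcpus, info.vcpu_max_id + 1);
      /* clear the bits beyond the domain's highest vcpu id;
       * cpumap->size is in bytes, hence the * 8 for bits */
      for (i = info.vcpu_max_id + 1; i < cpumap->size * 8; i++)
          libxl_bitmap_reset(cpumap, i);
  }

That would let cpupool-numa-split carry on with dom0's two pinned vcpus rather than bailing out, though I don't know whether other callers depend on the hard failure here.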