
Re: [Xen-users] Xen 4.6 and Intel MKL Linpack behaviour



On 7 May 2016 at 16:41, Marko Đukić <marko.djukic@xxxxxxxxx> wrote:
>
> Hello!
>
> I am doing some testing with Xen by running benchmarking software inside
> a virtual machine. I have set up a VM with 4 vCPUs; the host hardware has 1
> CPU with 4 cores. Both host and guest run Ubuntu Server 16.04.
>
> When running Intel MKL Linpack
> (https://software.intel.com/en-us/articles/intel-mkl-benchmarks-suite) inside
> a PVH or PV guest, Linpack detects only 1 CPU with 1 core (reported in the
> output at the start of the program). If I change the guest configuration to
> HVM, Linpack detects 1 CPU with 4 cores.
>
> Is this a bug in PV/PVH mode or am I missing a configuration setting?
>
> The configuration file for the PV guest is as follows:
>
> name = "ubuntu64"
> bootloader = "/usr/lib/xen-4.6/bin/pygrub"
> memory = 8192
> vcpus = 4
>
> vif = ['script=vif-bridge,bridge=br0']
> disk = ['tap:aio:/home/vm/xen_ubuntu16.04_amd64.raw,xvda,w' ]
>
> For the PVH guest, just pvh = 1 is added (the full file is sketched below the quote).
>
> Regards
>
> Marko
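
For completeness, the PVH variant mentioned above is the same file with
only pvh = 1 added; a sketch of the full config:

name = "ubuntu64"
bootloader = "/usr/lib/xen-4.6/bin/pygrub"
memory = 8192
vcpus = 4
pvh = 1

vif = ['script=vif-bridge,bridge=br0']
disk = ['tap:aio:/home/vm/xen_ubuntu16.04_amd64.raw,xvda,w' ]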


I checked /proc/cpuinfo in the PVH and HVM guests and saw a difference
in cores: in the HVM guest cpuinfo shows 4 cores, while in the PVH guest
it shows only one core.
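
For anyone who wants to reproduce the comparison, here is a minimal C
sketch that summarizes the /proc/cpuinfo fields relevant to topology
(the field names assumed here are the ones x86 Linux prints):

/* cpuinfo-summary.c -- count logical CPUs and report the topology
 * fields from /proc/cpuinfo.
 * Build: gcc -o cpuinfo-summary cpuinfo-summary.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) {
        perror("/proc/cpuinfo");
        return 1;
    }

    char line[256];
    int processors = 0, cores = 0, siblings = 0;

    while (fgets(line, sizeof(line), f)) {
        char *colon = strchr(line, ':');
        if (!strncmp(line, "processor", 9))
            processors++;                     /* one block per logical CPU */
        else if (colon && !strncmp(line, "cpu cores", 9))
            cores = atoi(colon + 1);          /* cores per physical package */
        else if (colon && !strncmp(line, "siblings", 8))
            siblings = atoi(colon + 1);       /* logical CPUs per package */
    }
    fclose(f);

    printf("logical CPUs: %d, cpu cores: %d, siblings: %d\n",
           processors, cores, siblings);
    return 0;
}

The "cpu cores" and "siblings" values are what the kernel derived from
CPUID at boot, which is why they differ between guest types.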
Google then pointed me to a similar question that has apparently
already been answered (http://markmail.org/message/vpj3ajlg6h7fkzro):
quote (Wei Liu):
"I guess LINPACK relies heavily on using topology to make decisions on
how it should run?

Give guest host-cpu topology -- no, it's not possible to do this at the
moment. That's a feature under development."
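
If that guess is right, Linpack is sizing itself from the topology that
CPUID reports inside the guest, not from the number of online vCPUs. A
minimal sketch of that kind of check -- this illustrates the general
technique only and is not MKL's actual code:

/* topo-check.c -- read CPUID leaf 1: the HTT flag (EDX bit 28) and the
 * "logical processors per package" field (EBX bits 16-23).
 * Build: gcc -o topo-check topo-check.c */
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang wrapper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }

    int htt     = (edx >> 28) & 1;     /* multi-threading capable? */
    int logical = (ebx >> 16) & 0xff;  /* logical CPUs per package */

    printf("HTT flag: %d, logical CPUs per package: %d\n", htt, logical);
    return 0;
}

This matches what I see: in the HVM guest the emulated CPUID reflects
the 4 vCPUs, while in PV/PVH Xen filters the topology leaves, so
software that trusts them concludes "1 package, 1 core".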

Can anyone explain why vCPUs are presented differently to the guest
under PV/PVH as opposed to HVM?

Regards

Marko

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

