
Re: [Xen-users] Xen 4.6 and Intel MKL Linpack behaviour

On Mon, 2016-05-09 at 11:34 +0200, Roger Pau Monné wrote:
> Adding Wei and Dario who did some work on topology before.
Thanks Roger,

> On Sun, May 08, 2016 at 01:44:48PM +0200, Marko Đukić wrote:
> > 
> > I checked the /proc/cpuinfo of PVH and HVM guest and saw a
> > difference
> > in cores - in HVM guest cpuinfo shows 4 cores, in PVH cpuinfo shows
> > only one core.
> > Google then pointed me out that a similar question has apparently
> > already been answered (http://markmail.org/message/vpj3ajlg6h7fkzro
> > ):
> > Can anyone explain why vcpus are presented differently to the guest
> > if
> > we look at PV/PVH and HVM?
> There are several differences between PV(H) and HVM. For one, the
> cpuid information returned to PV and HVM guests is different. Also,
> PV(H) guests don't have ACPI tables, which is where some of this
> topology is reported. HVM guests, OTOH, have ACPI tables that report
> a sensible topology.
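To make the difference concrete, here is a minimal sketch of how a tool might derive topology from /proc/cpuinfo alone: count the "processor" blocks and read the "cpu cores" and "physical id" fields. The sample excerpt below is hypothetical, not a capture from the guests in question; a PVH guest reporting "cpu cores: 1" per processor would come out of the same parsing as single-core packages.

```python
# Sketch: inferring topology from /proc/cpuinfo text.
# HVM_SAMPLE is a hypothetical excerpt (two of four logical CPUs shown),
# not an actual capture from the guests discussed in this thread.

HVM_SAMPLE = """\
processor\t: 0
physical id\t: 0
core id\t: 0
cpu cores\t: 4

processor\t: 1
physical id\t: 0
core id\t: 1
cpu cores\t: 4
"""

def parse_cpuinfo(text):
    """Return a list of {field: value} dicts, one per logical CPU."""
    cpus = []
    for block in text.strip().split("\n\n"):
        cpu = {}
        for line in block.splitlines():
            key, _, value = line.partition(":")
            cpu[key.strip()] = value.strip()
        cpus.append(cpu)
    return cpus

def summarize(cpus):
    """Return (logical CPUs, cores per package, packages)."""
    logical = len(cpus)
    cores = int(cpus[0].get("cpu cores", 1))
    packages = len({c.get("physical id", i) for i, c in enumerate(cpus)})
    return logical, cores, packages
```

If Linpack (or MKL underneath it) derives its thread placement this way, a guest whose cpuinfo reports "cpu cores: 1" would be treated as a machine of single-core packages, which could explain the different behaviour.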

> You should try to figure out how Linpack finds out about the CPU
> topology, 
> and then we can maybe try to fix it.
Well, I'm not sure. For example, if you give 4 vcpus to a PV guest
(via "vcpus=4" in the config file), 'cat /proc/cpuinfo', done from
inside the guest, should show info for 4 processors; is this the
case or not?

If it is not, there are other issues...
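A quick way to answer that question from inside the guest (a sketch, assuming a Linux guest with /proc mounted) is to count the "processor" entries directly:

```python
# Count the logical CPUs the guest kernel reports, i.e. the number of
# "processor : N" entries in /proc/cpuinfo. With "vcpus=4" in the guest
# config, this should print 4.
def count_logical_cpus(path="/proc/cpuinfo"):
    with open(path) as f:
        return sum(1 for line in f if line.startswith("processor"))

if __name__ == "__main__":
    print(count_logical_cpus())
```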

<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


Xen-users mailing list


