
Re: [Xen-devel] [PATCH] x86/vpmu: Add get/put_vpmu() and VPMU_ENABLED





On 02/16/2017 12:09 PM, Andrew Cooper wrote:
On 16/02/17 16:59, Jan Beulich wrote:
On 16.02.17 at 15:59, <boris.ostrovsky@xxxxxxxxxx> wrote:
vpmu_enabled() (used by hvm/pv_cpuid() to properly report leaf 0xa
for Intel processors) is based on the value of the VPMU_CONTEXT_ALLOCATED
bit. This is problematic:
* For HVM guests the VPMU context is allocated lazily, during the first
  access to VPMU MSRs. Since the leaf is typically queried before the
  guest attempts to read or write the MSRs, it is likely that CPUID will
  report no PMU support.
* For PV guests the context is allocated eagerly, but only in response to
  the guest's XENPMU_init hypercall. There is a chance that the guest will
  try to read CPUID before making this hypercall.
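
For illustration, the check being replaced looks roughly like the sketch
below (modelled on the Xen vPMU macros; exact names and layout may differ
in the tree this patch targets):

/* Sketch only: modelled on xen/include/asm-x86/vpmu.h. */
#define vcpu_vpmu(vcpu)         (&(vcpu)->arch.vpmu)
#define vpmu_is_set(vpmu, x)    ((vpmu)->flags & (x))

/*
 * Old behaviour: "enabled" really means "context already allocated",
 * which is typically not yet true when the guest queries CPUID.
 */
#define vpmu_enabled(vcpu) \
    vpmu_is_set(vcpu_vpmu(vcpu), VPMU_CONTEXT_ALLOCATED)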

This patch introduces a VPMU_ENABLED flag which is set (subject to vpmu_mode
constraints) during VCPU initialization for both PV and HVM guests. Since
this flag is expected to be managed together with vpmu_count, get/put_vpmu()
are added to simplify the code.
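
As a rough illustration of how such a pair might look (a sketch only, not
the actual patch; vpmu_lock, vpmu_count, vpmu_mode and the XENPMU_MODE_*
constants are assumed to exist as in the Xen vPMU code, and handling of
XENPMU_MODE_ALL is omitted):

static void get_vpmu(struct vcpu *v)
{
    spin_lock(&vpmu_lock);

    /*
     * Count active VPMUs so that vpmu_mode cannot be changed while
     * some guest may still be relying on the current setting.
     */
    if ( vpmu_mode & (XENPMU_MODE_SELF | XENPMU_MODE_HV) )
    {
        vpmu_count++;
        vpmu_set(vcpu_vpmu(v), VPMU_ENABLED);
    }

    spin_unlock(&vpmu_lock);
}

static void put_vpmu(struct vcpu *v)
{
    spin_lock(&vpmu_lock);

    if ( vpmu_is_set(vcpu_vpmu(v), VPMU_ENABLED) )
    {
        vpmu_count--;
        vpmu_reset(vcpu_vpmu(v), VPMU_ENABLED);
    }

    spin_unlock(&vpmu_lock);
}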
I think VPMU_ENABLED is misleading, as it could equally mean the state
after the guest has enabled it. How about VPMU_AVAILABLE?

The problem is a little deeper than that.

First, there is whether it is available based on hypervisor configuration.

This bit is set only if vpmu_mode permits it.


Second, if it is available, has the toolstack chosen to allow the domain
to use it.  This should determine whether features/information are
visible in CPUID.

You mean if the toolstack masks out leaf 0xa on Intel? I could check this in get_vpmu(). Is this information available by the time vcpu_initialise() runs?
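
For reference, the kind of leaf 0xa gating being discussed might look
roughly like the sketch below; the function name and signature here are
hypothetical, since the real logic sits in the pv_cpuid()/hvm_cpuid()
switch statements.

static void sanitise_leaf_0xa(const struct vcpu *v, uint32_t *eax,
                              uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
{
    /* Hide the architectural PMU unless the vPMU is available. */
    if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
         !vpmu_enabled(v) )
        *eax = *ebx = *ecx = *edx = 0;
}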


Finally, if vpmu is permitted, has the domain turned it on.

HVM domains always do, and PV domains essentially do too.


-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

