
Re: [Xen-devel] [PATCH] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed



Hi,

On 27/11/2019 09:44, Jan Beulich wrote:
On 26.11.2019 18:17, Paul Durrant wrote:
From: Julien Grall <jgrall@xxxxxxxxxx>

A guest will set up a shared page with the hypervisor for each vCPU via
XENPMU_init. The page will then get mapped in the hypervisor and only
released when XENPMU_finish is called.

This means that if the guest is not shut down gracefully (for instance,
if it is killed with xl destroy), the page will stay mapped in the
hypervisor.
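
[ For context, the guest-side registration being described would look
roughly like the sketch below. It is only an illustration: the field
names follow my reading of the public xen_pmu_params interface and a
Linux-style HYPERVISOR_xenpmu_op wrapper, and are not taken from this
patch. ]

/* Illustrative sketch of a guest registering its per-vCPU PMU page via
 * XENPMU_init.  Names follow the public xenpmu interface as I understand
 * it; treat them as assumptions rather than a quote of real guest code. */
static struct xen_pmu_data *xenpmu_data;    /* one page, shared with Xen */

static int register_vpmu_page(unsigned int cpu)
{
    struct xen_pmu_params xp = {};

    xenpmu_data = (void *)get_zeroed_page(GFP_KERNEL);
    if (!xenpmu_data)
        return -ENOMEM;

    xp.val = pfn_to_mfn(PFN_DOWN(__pa(xenpmu_data)));  /* frame Xen will map */
    xp.vcpu = cpu;
    xp.version.maj = XENPMU_VER_MAJ;
    xp.version.min = XENPMU_VER_MIN;

    /* Xen maps the page here and keeps the mapping until XENPMU_finish --
     * or, with this patch, until the domain is destroyed. */
    return HYPERVISOR_xenpmu_op(XENPMU_init, &xp);
}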

Isn't this still too weak a description? It's not the tool stack
invoking XENPMU_finish, but the guest itself afaics. I.e. a
misbehaving guest could prevent proper cleanup even with graceful
shutdown.

@@ -2224,6 +2221,9 @@ int domain_relinquish_resources(struct domain *d)
     if ( is_hvm_domain(d) )
         hvm_domain_relinquish_resources(d);
 
+    for_each_vcpu ( d, v )
+        vpmu_destroy(v);
+
     return 0;
 }

I think simple things which may allow shrinking the page lists
should be done early in the function. As vpmu_destroy() looks
to be idempotent, how about leveraging the very first
for_each_vcpu() loop in the function (there are too many of them
there anyway, at least for my taste)?
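
If I understand the suggestion correctly, it would look roughly like the
sketch below (the existing contents of that first loop are elided; this
is only an illustration of the placement, not a tested change):

int domain_relinquish_resources(struct domain *d)
{
    struct vcpu *v;

    /* ... earlier teardown steps ... */

    /* First for_each_vcpu() loop in the function: fold the vPMU teardown
     * in here so the shared page is unmapped early, before the page
     * lists are walked. */
    for_each_vcpu ( d, v )
    {
        /* ... existing per-vCPU relinquish work ... */
        vpmu_destroy(v);
    }

    /* ... rest of the function, including hvm_domain_relinquish_resources() ... */

    return 0;
}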

That said, it is not entirely obvious that vpmu_destroy() is idempotent.

For instance, I can't find who clears VPMU_CONTEXT_ALLOCATED, so I think vpmu_arch_destroy() would be executed over and over.

I don't know whether this is an issue, but I can't convince myself that it is not one. Did I miss anything?
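
To make the concern concrete: the usual way to keep such a teardown path
idempotent is to gate it on the allocation flag and clear that flag on
the first pass, roughly as below. This is a sketch of the pattern I
would expect, not a quote of the current code:

static void vpmu_arch_destroy(struct vcpu *v)
{
    struct vpmu_struct *vpmu = vcpu_vpmu(v);

    /* Nothing to do if the context was never allocated (or was already
     * torn down). */
    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
        return;

    /* ... free/unmap the per-vCPU context ... */

    /* Without clearing the flag here, a second call would run the
     * teardown again -- which is the non-idempotency I am worried
     * about. */
    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
}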

Cheers,

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

