Re: [Xen-devel] [PATCH] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed
> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 27 November 2019 09:44
> To: Durrant, Paul <pdurrant@xxxxxxxxxx>; Grall, Julien <jgrall@xxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>; Wei
> Liu <wl@xxxxxxx>
> Subject: Re: [PATCH] xen/x86: vpmu: Unmap per-vCPU PMU page when the
> domain is destroyed
>
> On 26.11.2019 18:17, Paul Durrant wrote:
> > From: Julien Grall <jgrall@xxxxxxxxxx>
> >
> > A guest will set up a shared page with the hypervisor for each vCPU via
> > XENPMU_init. The page will then get mapped in the hypervisor and only
> > released when XENPMU_finish is called.
> >
> > This means that if the guest is not shut down gracefully (such as via
> > xl destroy), the page will stay mapped in the hypervisor.
>
> Isn't this still too weak a description? It's not the tool stack
> invoking XENPMU_finish, but the guest itself afaics. I.e. a
> misbehaving guest could prevent proper cleanup even with graceful
> shutdown.
>

Ok, how about 'if the guest fails to invoke XENPMU_finish, e.g. if it is
destroyed rather than cleanly shut down'?

> > @@ -2224,6 +2221,9 @@ int domain_relinquish_resources(struct domain *d)
> >      if ( is_hvm_domain(d) )
> >          hvm_domain_relinquish_resources(d);
> >
> > +    for_each_vcpu ( d, v )
> > +        vpmu_destroy(v);
> > +
> >      return 0;
> >  }
>
> I think simple things which may allow shrinking the page lists
> should be done early in the function. As vpmu_destroy() looks
> to be idempotent, how about leveraging the very first
> for_each_vcpu() loop in the function (there are too many of them
> there anyway, at least for my taste)?

Ok. I did wonder where in the sequence was best... Leaving it at the end
obviously puts it closer to where it was previously called, but I can't
see any harm in moving it earlier.

  Paul

> Jan
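[Editorial illustration: a minimal sketch of the placement Jan suggests,
not the patch as posted or committed. The overall shape of
domain_relinquish_resources() and the elided teardown steps are assumed
for illustration; only vpmu_destroy() and for_each_vcpu() are taken from
the discussion above.]

    /*
     * Sketch only: fold the vPMU cleanup into the first
     * for_each_vcpu() loop of domain_relinquish_resources() instead
     * of adding a new loop just before the final return.
     */
    int domain_relinquish_resources(struct domain *d)
    {
        struct vcpu *v;

        /* ... earlier relinquish steps ... */

        for_each_vcpu ( d, v )
        {
            /* ... existing per-vCPU teardown kept as-is ... */

            /*
             * Unmap the per-vCPU PMU page shared via XENPMU_init.
             * vpmu_destroy() being idempotent makes this safe both
             * when the guest already invoked XENPMU_finish (nothing
             * left to unmap) and when this function is re-entered
             * after a preemption, in which case the loop runs again.
             */
            vpmu_destroy(v);
        }

        /* ... remaining relinquish steps ... */

        return 0;
    }

The design point behind the early placement: releasing the shared pages in
the first loop lets the domain's page list shrink before the rest of the
(potentially preemptible) teardown runs, which is exactly why the
idempotence of vpmu_destroy() matters here.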