
Re: [Xen-devel] [PATCH] x86: use VMLOAD for PV context switch



>>> On 17.08.18 at 00:04, <brian.woods@xxxxxxx> wrote:
> On Tue, Jul 10, 2018 at 04:14:11AM -0600, Jan Beulich wrote:
>> Having noticed that VMLOAD alone is about as fast as a single one of
>> the involved WRMSRs, I thought it might be a reasonable idea to also
>> use it for PV. Measurements, however, have shown that an actual
>> improvement can be achieved only with an early prefetch of the VMCB
>> (thanks to Andrew for suggesting to try this), which I have to admit
>> I can't really explain. This way, on my Fam15 box, a context switch
>> takes over 100 clocks less on average (the measured values vary
>> heavily in all cases, though).
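
For reference, on the context-switch path the change boils down to
something like the below (a simplified sketch; svm_load_segs(),
host_vmcb_va, and the field names are illustrative rather than taken
verbatim from the patch):

/* Per-CPU mapping of the host VMCB, assumed to be set up elsewhere. */
static DEFINE_PER_CPU(struct vmcb_struct *, host_vmcb_va);

/* Early in the context switch: pull the VMCB into the cache. */
static void svm_load_segs_prefetch(void)
{
    const struct vmcb_struct *vmcb = this_cpu(host_vmcb_va);

    if ( vmcb )
        __builtin_prefetch(&vmcb->fs);
}

/* Later, replacing the individual segment register / MSR writes: */
static void svm_load_segs(unsigned int fs_sel, unsigned long fs_base,
                          unsigned int gs_sel, unsigned long gs_base,
                          unsigned long gs_shadow)
{
    struct vmcb_struct *vmcb = this_cpu(host_vmcb_va);

    vmcb->fs.sel = fs_sel;
    vmcb->fs.base = fs_base;
    vmcb->gs.sel = gs_sel;
    vmcb->gs.base = gs_base;
    vmcb->kerngsbase = gs_shadow;
    /* ... LDTR and the rest of the VMLOAD-covered state as needed ... */

    asm volatile ( ".byte 0x0f,0x01,0xda" /* VMLOAD */
                   :: "a" (virt_to_maddr(vmcb)) : "memory" );
}
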
>> 
>> This is intentionally not using a new hvm_funcs hook: For one, this is
>> all about PV, and something similar can hardly be done for VMX.
>> Furthermore, the indirect-to-direct call patching that is meant to be
>> applied to most hvm_funcs hooks would be ugly to make work with
>> functions having more than 6 parameters.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> I have confirmed with a senior hardware engineer that using VMLOAD in
> this fashion, i.e. with PV, is safe and recommended for performance.
> 
> Acked-by: Brian Woods <brian.woods@xxxxxxx>

Thanks. There's another aspect in this same area that I'd like to
improve, and hence seek clarification on up front: Currently the
SVM code uses two pages per CPU, one for host_vmcb and the other
for hsa. Afaict the two uses are entirely disjoint: The host save
area looks to be simply yet another VMCB, and the parts of it
accessed during VMRUN / VM exit are fully separate from the ones
used by VMLOAD / VMSAVE. Therefore I think the two could be folded
into a single page, reducing code size as well as memory (and
perhaps cache) footprint.

I think this separation was done because the PM describes the two
data structures separately, but iirc nothing is said anywhere that
the two indeed need to be distinct.
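
To make the folding a little more concrete, per-CPU setup could then
look roughly like the below (sketch only; function and variable
names as well as error handling are merely illustrative):

static DEFINE_PER_CPU(paddr_t, host_vmcb);

static int svm_cpu_up_prepare(unsigned int cpu)
{
    /* One page instead of two: host save area and host VMCB in one. */
    void *p = alloc_xenheap_page();

    if ( !p )
        return -ENOMEM;

    clear_page(p);
    per_cpu(host_vmcb, cpu) = virt_to_maddr(p);

    return 0;
}

static void svm_cpu_up(void)
{
    paddr_t pa = this_cpu(host_vmcb);

    /* VMRUN / #VMEXIT use the page via VM_HSAVE_PA ... */
    wrmsrl(MSR_K8_VM_HSAVE_PA, pa);

    /* ... while VMSAVE / VMLOAD get handed the very same address. */
    asm volatile ( ".byte 0x0f,0x01,0xdb" /* VMSAVE */
                   :: "a" (pa) : "memory" );
}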

Jan


