
[Xen-devel] fpu_taskswitch adjustment proposal


  • To: "xen-devel" <xen-devel@xxxxxxxxxxxxx>
  • From: "Jan Beulich" <JBeulich@xxxxxxxx>
  • Date: Fri, 15 Jun 2012 17:03:59 +0100
  • Delivery-date: Fri, 15 Jun 2012 16:03:41 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

While pv-ops so far doesn't care to eliminate the two trap-and-
emulate CR0 accesses from the asm/xor.h save/restore
operations, the legacy x86-64 kernel uses conditional clts()/stts()
for this purpose. While looking into whether to extend this to the
AVX operations newly added there in 3.5, I realized that this isn't
fully correct: it doesn't properly nest inside a kernel_fpu_begin()/
kernel_fpu_end() pair, as it will stts() at the end no matter what
the original state of CR0.TS was.

In order not to introduce completely new hypercalls to overcome
this (fpu_taskswitch isn't really extensible on its own), I'm
considering adding a new VM assist, altering the fpu_taskswitch
behavior so that it returns an indicator of whether any change
to the virtual CR0.TS was actually made. That way, the kernel can
implement a true save/restore cycle here.

In order to allow the kernel to run on older hypervisors without
extra conditionals (behaving the same way as it does currently,
i.e. with the incorrect nesting), the return value 0 (which the
hypercall currently always returns) would need to indicate that
the bit was actually flipped, such that on an old hypervisor an
updated kernel would always assume that something needs to be
restored.

Would that be an acceptable solution? Can someone think of
other ways to deal with this? (And - is pv-ops interested in
eliminating the two CR0 access emulations in what is supposed
to be a fast path?)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

