
Re: [Xen-devel] [PATCH] x86/vm_event: add gdtr_base to the vm_event structure



On 5/2/19 4:09 PM, Tamas K Lengyel wrote:
On Thu, May 2, 2019 at 2:57 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:

On 02.05.19 at 10:25, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
On 5/2/19 2:57 AM, Tamas K Lengyel wrote:
Receiving this register is useful for introspecting 32-bit Windows when the
event being trapped happened while in ring3.

Signed-off-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
Cc: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
Cc: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>
---
   xen/arch/x86/vm_event.c       | 5 +++++
   xen/include/public/vm_event.h | 3 ++-
   2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 51c3493b1d..873788e32f 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -179,6 +179,10 @@ static void vm_event_pack_segment_register(enum x86_segment segment,
           reg->es_sel = seg.sel;
           break;

+    case x86_seg_gdtr:
+        reg->gdtr_base = seg.base;
+        break;

Please also add limit, ar, sel, like the others do.

There's no ar or sel for GDT (and IDT). Instead, because of ...

In addition to
making this modification look less strange (since, in contrast to the
function name, nothing is packed for gdtr_base), it will also save
future work for applications that want to use gdtr and also require
backwards compatibility with previous vm_event versions.

As you know, for each such modification we need to have a separate
vm_event_vX header in our applications so that we're able to create a
ring buffer using requests and replies of the right size, and also to
properly interpret the ring buffer data; so the less frequent the
changes to the vm_event struct, the better.

... this I wonder whether the IDT details shouldn't get added at
the same time.
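
For illustration, a minimal sketch of what the extra cases in
vm_event_pack_segment_register() could look like if both registers were
packed with base and limit - gdtr_limit, idtr_base and idtr_limit are
assumed field names here, not yet part of the public vm_event header:

    case x86_seg_gdtr:
        reg->gdtr_base = seg.base;
        reg->gdtr_limit = seg.limit;   /* assumed field, mirroring gdtr_base */
        break;

    case x86_seg_idtr:
        reg->idtr_base = seg.base;     /* assumed field */
        reg->idtr_limit = seg.limit;   /* assumed field */
        break;

GDTR and IDTR carry no selector or access rights, so base and limit are
all there is to pack for them.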

The churn of the header is not that big of a deal - I do the same in
LibVMI and it's largely a copy-paste job that takes perhaps ~5 minutes
(see
https://github.com/tklengyel/libvmi/commit/7509ce56d0408dbec4e374b8780da69a7bd212e8).
So that should not be a factor in deciding whether we add a needed
extra piece or not. Since the vm_event ABI is already changing for Xen
4.13, the ABI version will only be bumped once for 4.13, so we should
be able to add/remove/shuffle whatever we want while the 4.13 merge
window is open.
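
As a rough sketch of the copy-paste involved on the application side -
the vm_event_v3_request_t / vm_event_v4_request_t names below stand for
application-local copies of the respective ABI layouts (the way LibVMI
keeps them), not types provided by Xen:

    /* Pick the request size matching the vm_event ABI version in use
     * (however the application determines it, e.g. from the Xen version). */
    switch ( abi_version )
    {
    case 3:
        req_size = sizeof(vm_event_v3_request_t);
        break;
    case 4:
        req_size = sizeof(vm_event_v4_request_t);
        break;
    default:
        return -1;   /* unknown ABI - refuse to set up the ring */
    }

Every vm_event struct change means adding one more such versioned
header and one more case like the above.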

That said, I don't have a use for idtr and gdtr_limit that warrants
receiving them via the vm_event structure - those pieces are always
just a hypercall away if needed. So in the vm_event structure I tend to
put only the registers needed often enough to warrant avoiding that
extra hypercall. But if Razvan says he has a use for them, I can add
them here.
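
For reference, that "hypercall away" path looks roughly like the sketch
below, using libxenctrl along the lines of what LibVMI does;
print_dtrs() is just an illustrative helper name and error handling is
trimmed:

    #include <stdio.h>
    #include <inttypes.h>
    #include <xenctrl.h>
    #include <xen/hvm/save.h>

    /* Fetch GDTR/IDTR base and limit for one vCPU from the HVM save record,
     * i.e. the extra hypercall that carrying them in vm_event would avoid. */
    static int print_dtrs(xc_interface *xch, uint32_t domid, uint16_t vcpu)
    {
        struct hvm_hw_cpu ctx;

        if ( xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(CPU),
                                              vcpu, &ctx, sizeof(ctx)) != 0 )
            return -1;

        printf("gdtr %#" PRIx64 "/%" PRIu32 " idtr %#" PRIx64 "/%" PRIu32 "\n",
               ctx.gdtr_base, ctx.gdtr_limit, ctx.idtr_base, ctx.idtr_limit);

        return 0;
    }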

We do use both - idtr and gdtr, base and limit - but we are indeed getting them via hypercall now. Considering that, and since we did add gdtr_base, I think it would be great if we could also add gdtr_limit.

Adding idtr as well would _theoretically_ be a nice speed optimization (removing the need for a hypercall), but it might also be a slowdown of the kind that comes with any increase in vm_event size (a VCPU with no more space left in the vm_event ring buffer will be blocked more often and go through more locking logic).

My point is, at the moment it's fine to skip idtr if it's not required by Jan, but if we do add either then let's please add both _base and _limit.

Hopefully in the near future we'll be able to use as much memory as necessary for storing vm_event data and just cache everything in the ring buffer, avoiding all the "get context" hypercalls.


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

