
Re: [Xen-devel] [PATCH 5/8] x86/vm-event/monitor: don't compromise monitor_write_data on domain cleanup



On 7/4/2016 5:13 PM, Jan Beulich wrote:
On 04.07.16 at 15:28, <czuzu@xxxxxxxxxxxxxxx> wrote:
On 7/4/2016 4:11 PM, Jan Beulich wrote:
On 04.07.16 at 15:03, <czuzu@xxxxxxxxxxxxxxx> wrote:
On 7/4/2016 3:47 PM, Jan Beulich wrote:
On 30.06.16 at 20:45, <czuzu@xxxxxxxxxxxxxxx> wrote:
The arch_vm_event structure is dynamically allocated and freed @
vm_event_cleanup_domain. This cleanup is triggered e.g. when the toolstack user
disables domain monitoring (xc_monitor_disable), which in turn effectively
discards any information that was in arch_vm_event.write_data.
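
(For context, the cleanup path being discussed looks roughly like the sketch below. It is paraphrased from memory of xen/arch/x86/vm_event.c around this series' baseline, so the exact code may differ; the point is only that freeing the per-vCPU structure also throws away whatever was pending in write_data.)

    /* Rough sketch, not verbatim Xen code. */
    void vm_event_cleanup_domain(struct domain *d)
    {
        struct vcpu *v;

        for_each_vcpu ( d, v )
        {
            /* Frees the whole per-vCPU vm_event structure, including any
             * monitor_write_data that has not been applied yet. */
            xfree(v->arch.vm_event);
            v->arch.vm_event = NULL;
        }
    }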
Isn't that rather a toolstack user bug, not warranting a relatively
extensive (even if mostly mechanical) hypervisor change like this
one? Sane monitor behavior, after all, is required anyway for the
monitored guest to survive.
Sorry, but could you please rephrase this? I don't quite understand what
you're saying.
The write_data field in arch_vm_event should _not ever_ be invalidated
as a direct result of a toolstack user's action.
The monitoring app can cause all kinds of problems to the guest it
monitors. Why would this specific one need taking care of in the
hypervisor, instead of demanding that the app not disable monitoring
at the wrong time?
Because it wouldn't be the wrong time to disable monitoring.
This is not a case of wrong toolstack usage, but rather a case of
xc_monitor_disable doing the wrong thing along the way.
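
(To illustrate why discarding write_data is harmful: hvm_do_resume() later replays register writes that the monitor asked to defer. Below is a simplified sketch of that path, assuming the pre-patch layout where v->arch.vm_event is a pointer; the exact code may differ.)

    /* Sketch: deferred CR/MSR writes are applied when the vCPU resumes. */
    if ( v->arch.vm_event )
    {
        struct monitor_write_data *w = &v->arch.vm_event->write_data;

        if ( w->do_write.cr0 )
        {
            hvm_set_cr0(w->cr0, 0);
            w->do_write.cr0 = 0;
        }
        /* ... similarly for cr3, cr4 and MSR writes ... */
    }
    /* If xc_monitor_disable() frees v->arch.vm_event before this runs,
     * the pending write is silently lost. */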

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -473,24 +473,24 @@ void hvm_do_resume(struct vcpu *v)
     if ( !handle_hvm_io_completion(v) )
         return;
 
-    if ( unlikely(v->arch.vm_event) )
+    if ( unlikely(v->arch.vm_event.emulate_flags) )
     {
-        if ( v->arch.vm_event->emulate_flags )
-        {
-            enum emul_kind kind = EMUL_KIND_NORMAL;
+        enum emul_kind kind;
 
-            if ( v->arch.vm_event->emulate_flags &
-                 VM_EVENT_FLAG_SET_EMUL_READ_DATA )
-                kind = EMUL_KIND_SET_CONTEXT;
-            else if ( v->arch.vm_event->emulate_flags &
-                      VM_EVENT_FLAG_EMULATE_NOWRITE )
-                kind = EMUL_KIND_NOWRITE;
+        ASSERT(v->arch.vm_event.emul_read_data);
 
-            hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
-                                       HVM_DELIVER_NO_ERROR_CODE);
+        kind = EMUL_KIND_NORMAL;
Please keep this being the initializer of the variable.
I put it there because of the ASSERT (to do that before anything else),
but I will undo if you prefer.
Since the initializer is (very obviously) independent of the
condition the ASSERT() checks, I indeed would prefer it to remain
the way it is before this change.
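
(That is, something along these lines, with the initializer kept at the declaration and the ASSERT() placed right after it; a sketch of the requested shape, not the final patch:)

        enum emul_kind kind = EMUL_KIND_NORMAL;

        ASSERT(v->arch.vm_event.emul_read_data);

        if ( v->arch.vm_event.emulate_flags &
             VM_EVENT_FLAG_SET_EMUL_READ_DATA )
            kind = EMUL_KIND_SET_CONTEXT;
        else if ( v->arch.vm_event.emulate_flags &
                  VM_EVENT_FLAG_EMULATE_NOWRITE )
            kind = EMUL_KIND_NOWRITE;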

-            v->arch.vm_event->emulate_flags = 0;
-        }
+        if ( v->arch.vm_event.emulate_flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
Long line.
Long but under 80 columns, isn't that the rule? :-)
I've counted 81 here.
You may have counted the beginning '+' as well. Is the rule "<= 80
columns in the source file" (in which case you're wrong) or is it "<= 80
columns in the resulting diff" (in which case I'm wrong)?
Ah - you first said "under 80" but now say "<= 80". The former
is what ./CODING_STYLE asks for. (And ftr the 81 included the
newline, so indeed I was off by one too.)

Jan

Hmm, I meant "<= 80" the first time too; I was under the impression that that's what CODING_STYLE asks for as well, but, quoting it:
    'Lines should be less than 80 characters in length.'
Why not <= 80? Shouldn't the limit (i.e. "<=") be 80 columns, the 80th included? (This points me the other way: http://programmers.stackexchange.com/questions/148677/why-is-80-characters-the-standard-limit-for-code-width )

Corneliu.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

