
Re: [PATCH v3 5/9] x86/PVH: actually show Dom0's register state from debug key '0'


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 23 Sep 2021 12:21:42 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Thu, 23 Sep 2021 10:21:56 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 22.09.2021 17:48, Roger Pau Monné wrote:
> On Tue, Sep 21, 2021 at 09:19:06AM +0200, Jan Beulich wrote:
>> vcpu_show_registers() didn't do anything for HVM so far. Note though
>> that some extra hackery is needed for VMX - see the code comment.
>>
>> Note further that the show_guest_stack() invocation is left alone here:
>> While strictly speaking guest_kernel_mode() should be predicated by a
>> PV / !HVM check, show_guest_stack() itself will bail immediately for
>> HVM.
>>
>> While there and despite not being PVH-specific, take the opportunity and
>> filter offline vCPU-s: There's not really any register state associated
>> with them, so avoid spamming the log with useless information while
>> still leaving an indication of the fact.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> I was pondering whether to also have the VMCS/VMCB dumped for every
>> vCPU, to present full state. The downside is that for larger systems
>> this would be a lot of output.
> 
> At least for Intel there's already a debug key to dump VMCS, so I'm
> unsure it's worth dumping it here also, as a user can get the
> information elsewhere (that's what I've always used to debug PVH
> TBH).

I know there is such a debug key. That one dumps _all_ VMCSes, though,
so it can get quite verbose on a big system (where even Dom0's output
alone may already be sizable).

>> --- a/xen/arch/x86/x86_64/traps.c
>> +++ b/xen/arch/x86/x86_64/traps.c
>> @@ -49,6 +49,39 @@ static void read_registers(struct cpu_us
>>      crs[7] = read_gs_shadow();
>>  }
>>  
>> +static void get_hvm_registers(struct vcpu *v, struct cpu_user_regs *regs,
>> +                              unsigned long crs[8])
> 
> Would this better be placed in hvm.c now that it's a HVM only
> function?

I was asking myself the same question, but decided that the placement
here is perhaps no bigger a problem than putting it there. Factors that
played into this:
- the specifics of how the crs[8] array is used,
- the fact that the PV function also lives here, not under pv/,
- the desire to keep the function static.

I can certainly be talked into moving the code, but I will want to see
convincing arguments that none of the three items above (and possibly
others I may have missed) is really a problem then.

>> @@ -159,24 +173,35 @@ void show_registers(const struct cpu_use
>>  void vcpu_show_registers(const struct vcpu *v)
>>  {
>>      const struct cpu_user_regs *regs = &v->arch.user_regs;

Please note the "const" here, in addition to my response further down.

>> -    bool kernel = guest_kernel_mode(v, regs);
>> +    struct cpu_user_regs aux_regs;
>> +    enum context context;
>>      unsigned long crs[8];
>>  
>> -    /* Only handle PV guests for now */
>> -    if ( !is_pv_vcpu(v) )
>> -        return;
>> -
>> -    crs[0] = v->arch.pv.ctrlreg[0];
>> -    crs[2] = arch_get_cr2(v);
>> -    crs[3] = pagetable_get_paddr(kernel ?
>> -                                 v->arch.guest_table :
>> -                                 v->arch.guest_table_user);
>> -    crs[4] = v->arch.pv.ctrlreg[4];
>> -    crs[5] = v->arch.pv.fs_base;
>> -    crs[6 + !kernel] = v->arch.pv.gs_base_kernel;
>> -    crs[7 - !kernel] = v->arch.pv.gs_base_user;
>> +    if ( is_hvm_vcpu(v) )
>> +    {
>> +        aux_regs = *regs;
>> +        get_hvm_registers(v->domain->vcpu[v->vcpu_id], &aux_regs, crs);
> 
> I wonder if you could load the values directly into v->arch.user_regs,
> but maybe that would taint some other info already there. I certainly
> haven't looked closely.

I had it that other way first, wondering whether altering the structure
there might be safe. It felt wrong to fiddle with the live registers,
and the "const" above was then the final bit that convinced me to go
the chosen route. Yet again - I can be talked into going the route you
outline, given convincing arguments. Don't forget that we e.g.
deliberately poison the selector values in debug builds (see
hvm_invalidate_regs_fields()) - that poisoning would get undermined if
we wrote directly into the structure.
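
To illustrate, the poisoning in question is roughly of this shape
(shortened, from memory - see hvm_invalidate_regs_fields() for the real
thing):

static void hvm_invalidate_regs_fields(struct cpu_user_regs *regs)
{
#ifndef NDEBUG
    /*
     * Stuff recognizable junk into fields which aren't (and don't need
     * to be) maintained for HVM guests, so stray consumers stand out.
     */
    regs->error_code = 0xbeef;
    regs->entry_vector = 0xbeef;
    regs->cs = 0xbeef;
    regs->ss = 0xbeef;
    regs->ds = 0xbeef;
    regs->es = 0xbeef;
    regs->fs = 0xbeef;
    regs->gs = 0xbeef;
#endif
}

Writing the real selector values straight into v->arch.user_regs from
the dump path would silently overwrite that poison, weakening its
ability to catch code consuming fields which were never filled in.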

Jan
