
Re: [PATCH v3 7/9] x86/PVH: actually show Dom0's stacks from debug key '0'


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 23 Sep 2021 12:47:26 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Thu, 23 Sep 2021 10:47:37 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 23.09.2021 12:31, Roger Pau Monné wrote:
> On Tue, Sep 21, 2021 at 09:20:00AM +0200, Jan Beulich wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3408,6 +3408,15 @@ enum hvm_translation_result hvm_copy_fro
>>                        PFEC_page_present | pfec, pfinfo);
>>  }
>>  
>> +enum hvm_translation_result hvm_copy_from_vcpu_linear(
>> +    void *buf, unsigned long addr, unsigned int size, struct vcpu *v,
>> +    unsigned int pfec)
> 
> Even if your current use case doesn't need it, would it be worth
> adding a pagefault_info_t parameter?

I'd prefer to add such parameters only once they become necessary.
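
If it does become necessary, the extension would presumably just mirror
hvm_copy_from_guest_linear(), i.e. grow the extra argument and forward
it (a sketch only; the added pfinfo parameter is hypothetical and the
forwarding to __hvm_copy() is merely what I'd expect):

    enum hvm_translation_result hvm_copy_from_vcpu_linear(
        void *buf, unsigned long addr, unsigned int size, struct vcpu *v,
        unsigned int pfec, pagefault_info_t *pfinfo /* hypothetical */)
    {
        /* Hand everything to the common copy helper, like the existing
         * "current"-based wrappers do. */
        return __hvm_copy(buf, addr, size, v,
                          HVMCOPY_from_guest | HVMCOPY_linear,
                          PFEC_page_present | pfec, pfinfo);
    }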

>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -364,6 +364,71 @@ static void show_guest_stack(struct vcpu
>>      printk("\n");
>>  }
>>  
>> +static void show_hvm_stack(struct vcpu *v, const struct cpu_user_regs *regs)
>> +{
>> +#ifdef CONFIG_HVM
>> +    unsigned long sp = regs->rsp, addr;
>> +    unsigned int i, bytes, words_per_line, pfec = PFEC_page_present;
>> +    struct segment_register ss, cs;
>> +
>> +    hvm_get_segment_register(v, x86_seg_ss, &ss);
>> +    hvm_get_segment_register(v, x86_seg_cs, &cs);
>> +
>> +    if ( hvm_long_mode_active(v) && cs.l )
>> +        i = 16, bytes = 8;
>> +    else
>> +    {
>> +        sp = ss.db ? (uint32_t)sp : (uint16_t)sp;
>> +        i = ss.db ? 8 : 4;
>> +        bytes = cs.db ? 4 : 2;
>> +    }
>> +
>> +    if ( bytes == 8 || (ss.db && !ss.base) )
>> +        printk("Guest stack trace from sp=%0*lx:", i, sp);
>> +    else
>> +        printk("Guest stack trace from ss:sp=%04x:%0*lx:", ss.sel, i, sp);
>> +
>> +    if ( !hvm_vcpu_virtual_to_linear(v, x86_seg_ss, &ss, sp, bytes,
>> +                                     hvm_access_read, &cs, &addr) )
>> +    {
>> +        printk(" Guest-inaccessible memory\n");
>> +        return;
>> +    }
>> +
>> +    if ( ss.dpl == 3 )
>> +        pfec |= PFEC_user_mode;
>> +
>> +    words_per_line = stack_words_per_line * (sizeof(void *) / bytes);
>> +    for ( i = 0; i < debug_stack_lines * words_per_line; )
>> +    {
>> +        unsigned long val = 0;
>> +
>> +        if ( (addr ^ (addr + bytes - 1)) & PAGE_SIZE )
>> +            break;
>> +
>> +        if ( !(i++ % words_per_line) )
>> +            printk("\n  ");
>> +
>> +        if ( hvm_copy_from_vcpu_linear(&val, addr, bytes, v,
>> +                                       pfec) != HVMTRANS_okay )
> 
> I think I'm confused, but what about guests without paging enabled?
> Don't you need to use hvm_copy_from_guest_phys (likely transformed
> into hvm_copy_from_vcpu_phys)?

__hvm_copy() calls hvm_translate_get_page() telling it whether the
input is a linear or physical address. hvm_translate_get_page() will
use paging_gva_to_gfn() in this case. The HAP backing function
changes when the guest {en,dis}ables paging, while shadow code deals
with paging disabled by installing an identity mapping
(d->arch.paging.shadow.unpaged_pagetable) which it would then end up
(needlessly) walking.

It really is - afaict - intentional for callers to not have to deal
with the special case.
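
Schematically (a heavily condensed sketch, not a verbatim quote of
hvm.c; the remaining parameters and the actual page lookup are
omitted):

    /* Pseudo-code only. */
    static int hvm_translate_get_page(struct vcpu *v, unsigned long addr,
                                      bool linear, uint32_t pfec)
    {
        gfn_t gfn;

        if ( linear )
            /*
             * Guest page table walk; with paging disabled, HAP's
             * then-active hook resp. shadow's unpaged identity page
             * table turn this into a 1:1 translation.
             */
            gfn = _gfn(paging_gva_to_gfn(v, addr, &pfec));
        else
            gfn = gaddr_to_gfn(addr);

        /* ... look up and take a reference on the page at gfn ... */
    }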

>> @@ -663,14 +728,22 @@ void vcpu_show_execution_state(struct vc
>>      }
>>  #endif
>>  
>> -    /* Prevent interleaving of output. */
>> -    flags = console_lock_recursive_irqsave();
>> +    /*
>> +     * Prevent interleaving of output if possible. For HVM we can't do so, as
>> +     * the necessary P2M lookups involve locking, which has to occur with IRQs
>> +     * enabled.
>> +     */
>> +    if ( !is_hvm_vcpu(v) )
>> +        flags = console_lock_recursive_irqsave();
>>  
>>      vcpu_show_registers(v);
>> -    if ( guest_kernel_mode(v, &v->arch.user_regs) )
>> +    if ( is_hvm_vcpu(v) )
>> +        show_hvm_stack(v, &v->arch.user_regs);
> 
> Would it make sense to unlock in show_hvm_stack, and thus keep the
> printing of vcpu_show_registers locked even when in HVM context?

Perhaps not _in_ show_hvm_stack(), but before calling it - sure, why not.
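
I.e. roughly (a sketch only, not the final shape of the code):

    /* Taken unconditionally again, so register output can't interleave. */
    flags = console_lock_recursive_irqsave();

    vcpu_show_registers(v);

    if ( is_hvm_vcpu(v) )
    {
        /* Drop the lock (re-enabling IRQs) ahead of the P2M lookups. */
        console_unlock_recursive_irqrestore(flags);
        show_hvm_stack(v, &v->arch.user_regs);
    }
    else
    {
        if ( guest_kernel_mode(v, &v->arch.user_regs) )
            show_guest_stack(v, &v->arch.user_regs);
        console_unlock_recursive_irqrestore(flags);
    }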

> TBH I've never found the guest stack dump to be helpful for debugging
> purposes, but maybe others do.

I can't count how many times I've used Dom0's stack dumps to actually
pinpoint problems.

Jan