
Re: [Xen-devel] [PATCH v2] x86/HVM: correct hvmemul_map_linear_addr() for multi-page case


  • To: Jan Beulich <JBeulich@xxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Alexandru Stefan ISAILA <aisaila@xxxxxxxxxxxxxxx>
  • Date: Wed, 31 Jul 2019 11:26:59 +0000
  • Accept-language: en-US
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Paul Durrant <paul.durrant@xxxxxxxxxx>
  • Delivery-date: Wed, 31 Jul 2019 11:27:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2] x86/HVM: correct hvmemul_map_linear_addr() for multi-page case


On 13.09.2018 13:12, Jan Beulich wrote:
> The function does two translations in one go for a single guest access.
> Any failure of the first translation step (guest linear -> guest
> physical), resulting in #PF, ought to take precedence over any failure
> of the second step (guest physical -> host physical). Bail out of the
> loop early solely when translation produces HVMTRANS_bad_linear_to_gfn,
> and record the most relevant of perhaps multiple different errors
> otherwise. (The choice of ZERO_BLOCK_PTR as sentinel is arbitrary.)
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

This is useful as groundwork for adding new functionality to hvmemul_map_linear_addr().

Reviewed-by: Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>

> ---
> v2: Add comment (mapping table) and adjust update_map_err()
>      accordingly.
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -532,6 +532,36 @@ static int hvmemul_do_mmio_addr(paddr_t
>   }
>   
>   /*
> + * Intended mapping, implemented without table lookup:
> + *
> + * -----------------------------------------
> + * | \ new |       |       |       |       |
> + * |   \   | OKAY  | NULL  | RETRY | UNHND |
> + * | err \ |       |       |       |       |
> + * -----------------------------------------
> + * | OKAY  | OKAY  | NULL  | RETRY | UNHND |
> + * -----------------------------------------
> + * | NULL  | NULL  | NULL  | RETRY | UNHND |
> + * -----------------------------------------
> + * | RETRY | RETRY | RETRY | RETRY | UNHND |
> + * -----------------------------------------
> + * | UNHND | UNHND | UNHND | UNHND | UNHND |
> + * -----------------------------------------
> + */
> +static void *update_map_err(void *err, void *new)
> +{
> +    if ( err == ZERO_BLOCK_PTR || err == ERR_PTR(~X86EMUL_OKAY) ||
> +         new == ERR_PTR(~X86EMUL_UNHANDLEABLE) )
> +        return new;
> +
> +    if ( new == ERR_PTR(~X86EMUL_OKAY) ||
> +         err == ERR_PTR(~X86EMUL_UNHANDLEABLE) )
> +        return err;
> +
> +    return err ?: new;
> +}
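
As a sanity check I ran the table through a quick standalone harness
(not Xen code: the X86EMUL_* values, ERR_PTR() and ZERO_BLOCK_PTR below
are stand-ins for illustration, with the sentinel picked only so it
cannot alias NULL or any ERR_PTR() value):

/* Compile with gcc; uses the ?: GNU extension, as the hypervisor does. */
#include <assert.h>

#define X86EMUL_OKAY          0   /* stand-in values */
#define X86EMUL_UNHANDLEABLE  1
#define X86EMUL_RETRY         3

#define ERR_PTR(v)      ((void *)(long)(v))
#define ZERO_BLOCK_PTR  ((void *)0x10)  /* arbitrary non-aliasing sentinel */

static void *update_map_err(void *err, void *new)
{
    if ( err == ZERO_BLOCK_PTR || err == ERR_PTR(~X86EMUL_OKAY) ||
         new == ERR_PTR(~X86EMUL_UNHANDLEABLE) )
        return new;

    if ( new == ERR_PTR(~X86EMUL_OKAY) ||
         err == ERR_PTR(~X86EMUL_UNHANDLEABLE) )
        return err;

    return err ?: new;
}

int main(void)
{
    /* RETRY beats NULL (bad gfn->mfn), whichever page reports first. */
    assert(update_map_err(NULL, ERR_PTR(~X86EMUL_RETRY)) ==
           ERR_PTR(~X86EMUL_RETRY));
    assert(update_map_err(ERR_PTR(~X86EMUL_RETRY), NULL) ==
           ERR_PTR(~X86EMUL_RETRY));

    /* UNHND beats everything, including RETRY. */
    assert(update_map_err(ERR_PTR(~X86EMUL_RETRY),
                          ERR_PTR(~X86EMUL_UNHANDLEABLE)) ==
           ERR_PTR(~X86EMUL_UNHANDLEABLE));

    /* A later NULL supersedes an earlier write-discard OKAY. */
    assert(update_map_err(ERR_PTR(~X86EMUL_OKAY), NULL) == NULL);

    return 0;
}

All four checks agree with the table above.
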
> +
> +/*
>    * Map the frame(s) covering an individual linear access, for writeable
>   * access.  May return NULL for MMIO, or ERR_PTR(~X86EMUL_*) for other
>   * errors including ERR_PTR(~X86EMUL_OKAY) for write-discard mappings.
> @@ -544,7 +574,7 @@ static void *hvmemul_map_linear_addr(
>       struct hvm_emulate_ctxt *hvmemul_ctxt)
>   {
>       struct vcpu *curr = current;
> -    void *err, *mapping;
> +    void *err = ZERO_BLOCK_PTR, *mapping;
>       unsigned int nr_frames = ((linear + bytes - !!bytes) >> PAGE_SHIFT) -
>           (linear >> PAGE_SHIFT) + 1;
>       unsigned int i;
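
For reference, the nr_frames expression visible in the context here
also holds up for the edge cases I could think of; a minimal standalone
check, assuming the usual x86 PAGE_SHIFT of 12:

#include <assert.h>

#define PAGE_SHIFT 12

static unsigned int nr_frames(unsigned long linear, unsigned int bytes)
{
    /* Frame of the last byte minus frame of the first byte, plus one;
     * the "- !!bytes" keeps a zero-length access from peeking into the
     * following frame. */
    return ((linear + bytes - !!bytes) >> PAGE_SHIFT) -
           (linear >> PAGE_SHIFT) + 1;
}

int main(void)
{
    assert(nr_frames(0x1000, 8) == 1);   /* wholly within one frame */
    assert(nr_frames(0x1ffc, 8) == 2);   /* straddles a frame boundary */
    assert(nr_frames(0x1fff, 2) == 2);   /* one byte on each side */
    assert(nr_frames(0x1000, 0) == 1);   /* zero-length access */
    return 0;
}
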
> @@ -600,27 +630,28 @@ static void *hvmemul_map_linear_addr(
>               goto out;
>   
>           case HVMTRANS_bad_gfn_to_mfn:
> -            err = NULL;
> -            goto out;
> +            err = update_map_err(err, NULL);
> +            continue;
>   
>           case HVMTRANS_gfn_paged_out:
>           case HVMTRANS_gfn_shared:
> -            err = ERR_PTR(~X86EMUL_RETRY);
> -            goto out;
> +            err = update_map_err(err, ERR_PTR(~X86EMUL_RETRY));
> +            continue;
>   
>           default:
> -            goto unhandleable;
> +            err = update_map_err(err, ERR_PTR(~X86EMUL_UNHANDLEABLE));
> +            continue;
>           }
>   
>           *mfn++ = page_to_mfn(page);
>   
>           if ( p2m_is_discard_write(p2mt) )
> -        {
> -            err = ERR_PTR(~X86EMUL_OKAY);
> -            goto out;
> -        }
> +            err = update_map_err(err, ERR_PTR(~X86EMUL_OKAY));
>       }
>   
> +    if ( err != ZERO_BLOCK_PTR )
> +        goto out;
> +
>       /* Entire access within a single frame? */
>       if ( nr_frames == 1 )
>           mapping = map_domain_page(hvmemul_ctxt->mfn[0]);
> @@ -639,6 +670,7 @@ static void *hvmemul_map_linear_addr(
>       return mapping + (linear & ~PAGE_MASK);
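
And the page offset re-applied on the success path behaves as expected;
a quick standalone illustration, with a malloc()ed buffer standing in
for the virtually contiguous vmap() area:

#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

int main(void)
{
    /* Two virtually contiguous "frames". */
    unsigned char *mapping = malloc(2 * PAGE_SIZE);
    unsigned long linear = 0x1ffc;             /* 8-byte access */
    unsigned char *p;

    assert(mapping);
    p = mapping + (linear & ~PAGE_MASK);
    assert(p == mapping + 0xffc);
    memset(p, 0xaa, 8);                        /* spills into frame 2 */
    assert(mapping[PAGE_SIZE] == 0xaa);

    free(mapping);
    return 0;
}
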
>   
>    unhandleable:
> +    ASSERT(err == ZERO_BLOCK_PTR);
>       err = ERR_PTR(~X86EMUL_UNHANDLEABLE);
>   
>    out:
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

