
Re: [Xen-devel] [PATCH] x86/HVM: correct hvmemul_map_linear_addr() for multi-page case


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 30 Aug 2023 16:30:39 +0200
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Paul Durrant <paul.durrant@xxxxxxxxxx>
  • Delivery-date: Wed, 30 Aug 2023 14:31:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Sep 12, 2018 at 03:09:35AM -0600, Jan Beulich wrote:
> The function does two translations in one go for a single guest access.
> Any failure of the first translation step (guest linear -> guest
> physical), resulting in #PF, ought to take precedence over any failure
> of the second step (guest physical -> host physical).

If my understanding is correct, this is done so that any #PF the
access would trigger is injected into the guest, even when a gfn ->
mfn error on another part of the access would otherwise mask it.
E.g. for an access crossing a page boundary where the first page
fails the gfn -> mfn step and the second page faults at the linear ->
gfn step, the guest should still get the #PF.

> Bail out of the
> loop early solely when translation produces HVMTRANS_bad_linear_to_gfn,
> and record the most relevant of perhaps multiple different errors
> otherwise. (The choice of ZERO_BLOCK_PTR as sentinel is arbitrary.)
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -531,6 +531,20 @@ static int hvmemul_do_mmio_addr(paddr_t
>      return hvmemul_do_io_addr(1, mmio_gpa, reps, size, dir, df, ram_gpa);
>  }
>  
> +static void *update_map_err(void *err, void *new)
> +{
> +    if ( err == ZERO_BLOCK_PTR || err == ERR_PTR(~X86EMUL_OKAY) )
> +        return new;
> +
> +    if ( new == ERR_PTR(~X86EMUL_OKAY) )
> +        return err;
> +
> +    if ( err == ERR_PTR(~X86EMUL_RETRY) )
> +        return new;
> +
> +    return err;
> +}
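
If I'm reading update_map_err() right, the resulting precedence is
that a first recorded NULL or ERR_PTR(~X86EMUL_UNHANDLEABLE) sticks,
ERR_PTR(~X86EMUL_RETRY) can be superseded by any later error, and
ERR_PTR(~X86EMUL_OKAY) (or the ZERO_BLOCK_PTR sentinel) is always
replaced.  Might be worth spelling that out in a comment on top of
the helper, e.g.:

    /*
     * Error precedence, highest first:
     *  1. NULL / ERR_PTR(~X86EMUL_UNHANDLEABLE): first one recorded wins.
     *  2. ERR_PTR(~X86EMUL_RETRY): superseded by any later "hard" error.
     *  3. ERR_PTR(~X86EMUL_OKAY) / ZERO_BLOCK_PTR: nothing recorded yet.
     */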
> +
>  /*
>   * Map the frame(s) covering an individual linear access, for writeable
>   * access.  May return NULL for MMIO, or ERR_PTR(~X86EMUL_*) for other errors
> @@ -544,7 +558,7 @@ static void *hvmemul_map_linear_addr(
>      struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
>      struct vcpu *curr = current;
> -    void *err, *mapping;
> +    void *err = ZERO_BLOCK_PTR, *mapping;
>      unsigned int nr_frames = ((linear + bytes - !!bytes) >> PAGE_SHIFT) -
>          (linear >> PAGE_SHIFT) + 1;
>      unsigned int i;
> @@ -600,27 +614,28 @@ static void *hvmemul_map_linear_addr(
>              goto out;
>  
>          case HVMTRANS_bad_gfn_to_mfn:
> -            err = NULL;
> -            goto out;
> +            err = update_map_err(err, NULL);
> +            continue;
>  
>          case HVMTRANS_gfn_paged_out:
>          case HVMTRANS_gfn_shared:
> -            err = ERR_PTR(~X86EMUL_RETRY);
> -            goto out;
> +            err = update_map_err(err, ERR_PTR(~X86EMUL_RETRY));
> +            continue;
>  
>          default:
> -            goto unhandleable;
> +            err = update_map_err(err, ERR_PTR(~X86EMUL_UNHANDLEABLE));
> +            continue;
>          }
>  
>          *mfn++ = page_to_mfn(page);

I have to admit I find it weird that, since we no longer exit the
loop when HVMTRANS_bad_gfn_to_mfn is returned, the entry at mfn[0]
might point to the gfn -> mfn translation for the second half of the
access.  AFAICT that would happen if the first half of the access
fails to translate with an error other than HVMTRANS_bad_linear_to_gfn
while the second half translates successfully.

I guess it doesn't matter much, because the vmap below will be
skipped, but it might still be worth adding a comment.
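Something along the lines of (wording is mine, of course):

    /*
     * Entries in mfn[] are only recorded for successfully translated
     * frames, so on error they might not correspond to the start of
     * the access.  That's harmless: the vmap() below is skipped
     * whenever an error has been recorded.
     */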

In fact, if one of the translations fails, the following ones could
use the cheaper paging_gva_to_gfn(), as we no longer care about
getting hold of the underlying page and just need to figure out
whether the access would trigger a #PF?
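
Something like the below at the top of the loop body, completely
untested and glossing over the exact error code handling (addr, pfec
and curr being the variables already used in the function):

    if ( err != ZERO_BLOCK_PTR )
    {
        uint32_t pfec2 = pfec;

        /* Cheaper check: would the linear -> gfn translation fault? */
        if ( paging_gva_to_gfn(curr, addr, &pfec2) == gfn_x(INVALID_GFN) )
        {
            if ( pfec2 & (PFEC_page_paged | PFEC_page_shared) )
                err = update_map_err(err, ERR_PTR(~X86EMUL_RETRY));
            else
            {
                /* Mirror the HVMTRANS_bad_linear_to_gfn path. */
                x86_emul_pagefault(pfec2 & ~PFEC_implicit, addr,
                                   &hvmemul_ctxt->ctxt);
                err = ERR_PTR(~X86EMUL_EXCEPTION);
                goto out;
            }
        }

        continue;
    }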

Thanks, Roger.
