
Re: [Xen-devel] [PATCH 3 of 3] x86/emulation: No need to get_gfn on zero ram_gpa



At 15:34 -0400 on 24 Apr (1335281653), Andres Lagar-Cavilla wrote:
>  xen/arch/x86/hvm/emulate.c |  48 ++++++++++++++++++++++++---------------------
>  1 files changed, 26 insertions(+), 22 deletions(-)
> 
> 
> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
> 
> diff -r 2ffc676120b8 -r 7a7443e80b99 xen/arch/x86/hvm/emulate.c
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -60,33 +60,37 @@ static int hvmemul_do_io(
>      ioreq_t *p = get_ioreq(curr);
>      unsigned long ram_gfn = paddr_to_pfn(ram_gpa);
>      p2m_type_t p2mt;
> -    mfn_t ram_mfn;
> +    mfn_t ram_mfn = _mfn(INVALID_MFN);
>      int rc;
>  
> -    /* Check for paged out page */
> -    ram_mfn = get_gfn_unshare(curr->domain, ram_gfn, &p2mt);
> -    if ( p2m_is_paging(p2mt) )
> -    {
> -        put_gfn(curr->domain, ram_gfn); 
> -        p2m_mem_paging_populate(curr->domain, ram_gfn);
> -        return X86EMUL_RETRY;
> -    }
> -    if ( p2m_is_shared(p2mt) )
> -    {
> -        put_gfn(curr->domain, ram_gfn); 
> -        return X86EMUL_RETRY;
> -    }
> -
> -    /* Maintain a ref on the mfn to ensure liveness. Put the gfn
> -     * to avoid potential deadlock wrt event channel lock, later. */
> -    if ( mfn_valid(mfn_x(ram_mfn)) )
> -        if ( !get_page(mfn_to_page(mfn_x(ram_mfn)),
> -             curr->domain) )
> +    /* Many callers pass a stub zero ram_gpa address. */
> +    if ( ram_gfn != 0 )

To gate on this safely, the 'stub' value needs to be made into something
that can't be confused with a real paddr, say, ram_gpa == -1.
Otherwise we lose the paging/sharing protection for I/O whose target is
in page zero.

> +    { 
> +        /* Check for paged out page */
> +        ram_mfn = get_gfn_unshare(curr->domain, ram_gfn, &p2mt);
> +        if ( p2m_is_paging(p2mt) )
>          {
> -            put_gfn(curr->domain, ram_gfn);
> +            put_gfn(curr->domain, ram_gfn); 
> +            p2m_mem_paging_populate(curr->domain, ram_gfn);
>              return X86EMUL_RETRY;
>          }
> -    put_gfn(curr->domain, ram_gfn);
> +        if ( p2m_is_shared(p2mt) )
> +        {
> +            put_gfn(curr->domain, ram_gfn); 
> +            return X86EMUL_RETRY;
> +        }
> +
> +        /* Maintain a ref on the mfn to ensure liveness. Put the gfn
> +         * to avoid potential deadlock wrt event channel lock, later. */
> +        if ( mfn_valid(mfn_x(ram_mfn)) )
> +            if ( !get_page(mfn_to_page(mfn_x(ram_mfn)),
> +                 curr->domain) )
> +            {
> +                put_gfn(curr->domain, ram_gfn);
> +                return X86EMUL_RETRY;
> +            }
> +        put_gfn(curr->domain, ram_gfn);
> +    }
>  
>      /*
>       * Weird-sized accesses have undefined behaviour: we discard writes
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel



 

