
[Xen-devel] Ping: [PATCH 3/3] memory: restrict XENMEM_remove_from_physmap to translated guests

>>> On 05.03.19 at 14:28,  wrote:
> The commit re-introducing it (14eb3b41d0 ["xen: reinstate previously
> unused XENMEM_remove_from_physmap hypercall"]), the one having
> originally introduced it (d818f3cb7c ["hvm: Use main memory for video
> memory"]), and the one then purging it again (78c3097e4f ["Remove unused
> XENMEM_remove_from_physmap"]) all make clear that this operation is
> intended for use on HVM (i.e. translated) guests only. Restrict it at
> least that much, because for PV guests the documentation (in the public
> header) does not even match the implementation: it talks about a GPFN
> as input, but get_page_from_gfn() assumes a GMFN in the non-translated
> case (and hands the passed-in value straight back).
> 
> Also lift the check from XENMEM_add_to_physmap{,_batch} handling up
> directly into top-level hypercall handling, and clarify things in the
> public header accordingly.
> 
> Take the liberty of also replacing a pointless use of "current" with a
> more efficient use of an existing local variable (or, to be precise, a
> function parameter).
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Andrew, George - any chance of getting an ack for the pretty simple
x86-specific code adjustment?

Jan
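
For reference, the GPFN/GMFN mismatch called out in the first quoted
paragraph boils down to the following (a simplified sketch, not the
literal Xen code; translate_gfn() is a hypothetical stand-in for the
p2m lookup performed for translated guests):

    /* Sketch of how get_page_from_gfn() resolves the passed-in frame
     * number, depending on the domain's paging mode. */
    static unsigned long frame_lookup(struct domain *d, unsigned long gfn)
    {
        if ( paging_mode_translate(d) )
            return translate_gfn(d, gfn); /* GPFN -> MFN via the p2m */

        /*
         * Non-translated (PV) case: the input is handed back unchanged,
         * i.e. it is effectively treated as a GMFN, while the public
         * header documents the argument as a GPFN.
         */
        return gfn;
    }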

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4470,9 +4470,6 @@ int xenmem_add_to_physmap_one(
>      mfn_t mfn = INVALID_MFN;
>      p2m_type_t p2mt;
>  
> -    if ( !paging_mode_translate(d) )
> -        return -EACCES;
> -
>      switch ( space )
>      {
>          case XENMAPSPACE_shared_info:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -815,6 +815,8 @@ int xenmem_add_to_physmap(struct domain
>      long rc = 0;
>      union xen_add_to_physmap_batch_extra extra;
>  
> +    ASSERT(paging_mode_translate(d));
> +
>      if ( xatp->space != XENMAPSPACE_gmfn_foreign )
>          extra.res0 = 0;
>      else
> @@ -997,12 +999,15 @@ static int get_reserved_device_memory(xe
>  
>  static long xatp_permission_check(struct domain *d, unsigned int space)
>  {
> +    if ( !paging_mode_translate(d) )
> +        return -EACCES;
> +
>      /*
>       * XENMAPSPACE_dev_mmio mapping is only supported for hardware Domain
>       * to map this kind of space to itself.
>       */
>      if ( (space == XENMAPSPACE_dev_mmio) &&
> -         (!is_hardware_domain(current->domain) || (d != current->domain)) )
> +         (!is_hardware_domain(d) || (d != current->domain)) )
>          return -EACCES;
>  
>      return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
> @@ -1386,7 +1391,9 @@ long do_memory_op(unsigned long cmd, XEN
>          if ( d == NULL )
>              return -ESRCH;
>  
> -        rc = xsm_remove_from_physmap(XSM_TARGET, curr_d, d);
> +        rc = paging_mode_translate(d)
> +             ? xsm_remove_from_physmap(XSM_TARGET, curr_d, d)
> +             : -EACCES;
>          if ( rc )
>          {
>              rcu_unlock_domain(d);
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -231,7 +231,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_machphys_map
>  
>  /*
>   * Sets the GPFN at which a particular page appears in the specified guest's
> - * pseudophysical address space.
> + * physical address space (translated guests only).
>   * arg == addr of xen_add_to_physmap_t.
>   */
>  #define XENMEM_add_to_physmap      7
> @@ -298,7 +298,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_add_to_physm
>  
>  /*
>   * Unmaps the page appearing at a particular GPFN from the specified guest's
> - * pseudophysical address space.
> + * physical address space (translated guests only).
>   * arg == addr of xen_remove_from_physmap_t.
>   */
>  #define XENMEM_remove_from_physmap      15
> 
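
From the guest side, the effect of the tightened check is that a PV
(non-translated) caller now gets -EACCES back instead of having its
argument silently treated as a GMFN. A minimal caller sketch, assuming
a Linux-style HYPERVISOR_memory_op() wrapper (the structure layout is
the one from xen/include/public/memory.h):

    #include <xen/xen.h>
    #include <xen/memory.h>

    /* Unmap the page at 'gpfn' from the calling domain's physmap.
     * Only meaningful for translated (HVM/PVH) guests; after this
     * patch a PV caller gets -EACCES. */
    static int unmap_own_gpfn(xen_pfn_t gpfn)
    {
        struct xen_remove_from_physmap xrfp = {
            .domid = DOMID_SELF,
            .gpfn  = gpfn,
        };

        return HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);
    }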