
Re: [PATCH] x86/HVM: refine when to send mapcache invalidation request to qemu



I came across the same issue when QEMU was holding an extra reference
to a page removed from the p2m via XENMEM_add_to_physmap. Please tell
me if I am talking nonsense, since my knowledge of QEMU's mapcache
invalidation is limited.

On Mon, 2020-09-28 at 12:44 +0200, Jan Beulich wrote:
> For one it was wrong to send the request only upon a completed
> hypercall: Even if only part of it completed before getting preempted,
> invalidation ought to occur. Therefore fold the two return statements.
> 
> And then XENMEM_decrease_reservation isn't the only means by which
> pages can get removed from a guest, yet all removals ought to be
> signaled to qemu. Put setting of the flag into the central
> p2m_remove_page() underlying all respective hypercalls.

Sounds good. I know this still does not catch everything, but it is at
least a nice improvement over the current behaviour.

> 
> Plus finally there's no point sending the request for the local domain
> when the domain acted upon is a different one. If anything that
> domain's qemu's mapcache may need invalidating, but it's unclear how
> useful this would be: That remote domain may not execute hypercalls at
> all, and hence may never make it to the point where the request
> actually gets issued. I guess the assumption is that such manipulation
> is not supposed to happen anymore once the guest has been started?

I would still want to set the invalidation flag even when the domain
acted upon is not the local domain. I know the remote domain may never
reach the point where the invalidation is actually issued, but to me
the problem is not whether we should set the flag, but whether we can
move the place where the flag is checked so that the request gets
issued reliably; that can be done in a future patch.
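
Concretely, a minimal sketch of what I have in mind for
p2m_remove_page() (untested, and assuming
arch.hvm.qemu_mapcache_invalidate stays a plain bool that the hypercall
completion path tests and clears):

    /*
     * Record that qemu's mapcache needs invalidating even when the page
     * is removed on behalf of a remote domain; making sure that domain
     * reliably reaches the point where send_invalidate_req() is issued
     * would then be a follow-up change.
     */
    p2m->domain->arch.hvm.qemu_mapcache_invalidate = true;

i.e. the same assignment as in the hunk below, just without the
p2m->domain == current->domain check.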

> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> Putting the check in guest_physmap_remove_page() might also suffice,
> but then a separate is_hvm_domain() would need adding again.
> 
> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -31,7 +31,6 @@
>  
>  static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
> -    const struct vcpu *curr = current;
>      long rc;
>  
>      switch ( cmd & MEMOP_CMD_MASK )
> @@ -41,14 +40,11 @@ static long hvm_memory_op(int cmd, XEN_G
>          return -ENOSYS;
>      }
>  
> -    if ( !curr->hcall_compat )
> +    if ( !current->hcall_compat )
>          rc = do_memory_op(cmd, arg);
>      else
>          rc = compat_memory_op(cmd, arg);
>  
> -    if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
> -        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
> -
>      return rc;
>  }
>  
> @@ -326,14 +322,11 @@ int hvm_hypercall(struct cpu_user_regs *
>  
>      HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
>  
> -    if ( curr->hcall_preempted )
> -        return HVM_HCALL_preempted;
> -
>      if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
>           test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
>          send_invalidate_req();
>  
> -    return HVM_HCALL_completed;
> +    return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
>  }
>  
>  /*
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -812,6 +812,9 @@ p2m_remove_page(struct p2m_domain *p2m,
>          }
>      }
>  
> +    if ( p2m->domain == current->domain )
> +        p2m->domain->arch.hvm.qemu_mapcache_invalidate = true;
> +

So my comment above makes me want to remove the condition.

>      return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
>                           p2m->default_access);
>  }
> 

Hongyan
