
Re: [Xen-devel] [PATCH v4] altp2m: Allow the hostp2m entries to be of type p2m_ram_shared



On 26/05/16 04:55, Tamas K Lengyel wrote:
> Move the sharing locks above altp2m to avoid a locking order violation.
> Allow applying altp2m mem_access settings when the hostp2m entries are
> shared. Also, do not trigger PoD for the hostp2m when setting altp2m
> mem_access, in line with the non-altp2m mem_access path. Finally, allow
> gfn remapping with p2m_ram_shared type gfns in altp2m views.
> 
> When using mem_sharing in combination with altp2m, unsharing events
> overwrite altp2m entries with those of the hostp2m (both remapped entries
> and mem_access settings). Users should take care not to share pages where
> this behavior is undesired.

I'm afraid this is not acceptable.  How could this ever be even remotely
usable?  If this is a necessary side effect of sharing, then I think the
original functionality, of un-sharing when setting an altp2m entry, is
the only option (along with not allowing sharing to happen when an altp2m
entry is present for a particular gfn).
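
For reference, the pre-patch behaviour looks roughly like this (a minimal
sketch assembled from the lines the patch removes from
p2m_set_altp2m_mem_access(); gfn_l, t, old_a and page_order are as in that
function):

    /* Pre-patch: P2M_UNSHARE forces a shared page to be unshared (and
     * P2M_ALLOC populates a PoD entry) before an altp2m entry is
     * created, so an altp2m never ends up aliasing a shared mfn. */
    mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a,
                          P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);

    if ( !mfn_valid(mfn) || t != p2m_ram_rw )
        return -ESRCH;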

Hmm, but actually this also brings up another tricky point: in an altp2m
you can change the mfn which backs a gfn.  This would need to be handled
properly in the sharing code's reverse map, and it doesn't look like it is
at the moment.
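
To make the concern concrete (an illustrative sketch only; the structure
below is hypothetical, not Xen's actual rmap code): the sharing code
keeps, per shared page, a list of (domain, gfn) back-references that it
walks when the page is unshared.

    /* Hypothetical shape of one reverse-map entry on a shared page: */
    struct gfn_backref {
        domid_t domain;
        unsigned long gfn;   /* assumed to describe the hostp2m mapping */
    };

    /* If an altp2m view remaps gfn G away from shared mfn M, the
     * backref (d, G) attached to M still claims that d maps M at G.
     * Unsharing M would then install M's private replacement at (d, G),
     * silently clobbering the altp2m remapping, and nothing updates or
     * removes the backref on behalf of the altp2m view. */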

On the whole, I think that if you're going to allow a single gfn to be
simultaneously shared and covered by an altp2m entry, you're going to
need to do a lot more work.

(Sorry for not catching a lot of this before...)

 -George

> 
> Signed-off-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
> ---
> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> 
> v4: Resolve locking problem by moving sharing locks above altp2m locks
> ---
>  xen/arch/x86/mm/mm-locks.h | 30 +++++++++++++++---------------
>  xen/arch/x86/mm/p2m.c      | 10 +++++-----
>  2 files changed, 20 insertions(+), 20 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
> index 086c8bb..74fdfc1 100644
> --- a/xen/arch/x86/mm/mm-locks.h
> +++ b/xen/arch/x86/mm/mm-locks.h
> @@ -242,6 +242,21 @@ declare_mm_lock(nestedp2m)
>  
>  declare_mm_rwlock(p2m);
>  
> +/* Sharing per page lock
> + *
> + * This is an external lock, not represented by an mm_lock_t. The memory
> + * sharing lock uses it to protect addition and removal of (gfn,domain)
> + * tuples to a shared page. We enforce order here against the p2m lock,
> + * which is taken after the page_lock to change the gfn's p2m entry.
> + *
> + * The lock is recursive because during share we lock two pages. */
> +
> +declare_mm_order_constraint(per_page_sharing)
> +#define page_sharing_mm_pre_lock()   mm_enforce_order_lock_pre_per_page_sharing()
> +#define page_sharing_mm_post_lock(l, r) \
> +        mm_enforce_order_lock_post_per_page_sharing((l), (r))
> +#define page_sharing_mm_unlock(l, r) mm_enforce_order_unlock((l), (r))
> +
>  /* Alternate P2M list lock (per-domain)
>   *
>   * A per-domain lock that protects the list of alternate p2m's.
> @@ -287,21 +302,6 @@ declare_mm_rwlock(altp2m);
>  #define p2m_locked_by_me(p)   mm_write_locked_by_me(&(p)->lock)
>  #define gfn_locked_by_me(p,g) p2m_locked_by_me(p)
>  
> -/* Sharing per page lock
> - *
> - * This is an external lock, not represented by an mm_lock_t. The memory
> - * sharing lock uses it to protect addition and removal of (gfn,domain)
> - * tuples to a shared page. We enforce order here against the p2m lock,
> - * which is taken after the page_lock to change the gfn's p2m entry.
> - *
> - * The lock is recursive because during share we lock two pages. */
> -
> -declare_mm_order_constraint(per_page_sharing)
> -#define page_sharing_mm_pre_lock()   mm_enforce_order_lock_pre_per_page_sharing()
> -#define page_sharing_mm_post_lock(l, r) \
> -        mm_enforce_order_lock_post_per_page_sharing((l), (r))
> -#define page_sharing_mm_unlock(l, r) mm_enforce_order_unlock((l), (r))
> -
>  /* PoD lock (per-p2m-table)
>   * 
>   * Protects private PoD data structs: entry and cache
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 9b19769..dc97082 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1768,10 +1768,10 @@ int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
>      if ( !mfn_valid(mfn) )
>      {
>          mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a,
> -                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
> +                              0, &page_order, NULL);
>  
>          rc = -ESRCH;
> -        if ( !mfn_valid(mfn) || t != p2m_ram_rw )
> +        if ( !mfn_valid(mfn) || (t != p2m_ram_rw && t != p2m_ram_shared) )
>              return rc;
>  
>          /* If this is a superpage, copy that first */
> @@ -2542,9 +2542,9 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
>      if ( !mfn_valid(mfn) )
>      {
>          mfn = hp2m->get_entry(hp2m, gfn_x(old_gfn), &t, &a,
> -                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
> +                              P2M_ALLOC, &page_order, NULL);
>  
> -        if ( !mfn_valid(mfn) || t != p2m_ram_rw )
> +        if ( !mfn_valid(mfn) || (t != p2m_ram_rw && t != p2m_ram_shared) )
>              goto out;
>  
>          /* If this is a superpage, copy that first */
> @@ -2567,7 +2567,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
>      if ( !mfn_valid(mfn) )
>          mfn = hp2m->get_entry(hp2m, gfn_x(new_gfn), &t, &a, 0, NULL, NULL);
>  
> -    if ( !mfn_valid(mfn) || (t != p2m_ram_rw) )
> +    if ( !mfn_valid(mfn) || (t != p2m_ram_rw && t != p2m_ram_shared) )
>          goto out;
>  
>      if ( !ap2m->set_entry(ap2m, gfn_x(old_gfn), mfn, PAGE_ORDER_4K, t, a,
> 
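
On the mm-locks.h hunk itself: ordering in that file is positional (a
lock's level comes from where it is declared), so moving the
per_page_sharing constraint above the altp2m locks changes which may be
taken first.  A minimal sketch of the order the move permits (the function
names here are illustrative, not taken from the patch):

    /* per_page_sharing is now ordered before the altp2m locks, so a
     * sharing path can take them in this sequence without tripping
     * the mm-locks ordering check: */
    mem_sharing_page_lock(pg);    /* sharing per-page lock: now outer */
    altp2m_list_lock(d);          /* altp2m list lock: now inner      */

    /* ... update altp2m entries for the shared page ... */

    altp2m_list_unlock(d);
    mem_sharing_page_unlock(pg);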

