
Re: [PATCH 2/2] x86/hap: Resolve mm-lock order violations when forking VMs with nested p2m



On 04.01.2021 18:41, Tamas K Lengyel wrote:
> @@ -893,13 +894,11 @@ static int nominate_page(struct domain *d, gfn_t gfn,
>          goto out;
>  
>      /*
> -     * Now that the page is validated, we can lock it. There is no
> -     * race because we're holding the p2m entry, so no one else
> -     * could be nominating this gfn.
> +     * Now that the page is validated, we can make it shared. There is no race
> +     * because we're holding the p2m entry, so no one else could be nominating
> +     * this gfn, and it is evidently not yet shared with any other VM, thus we
> +     * don't need to take the mem_sharing_page_lock here.
>       */
> -    ret = -ENOENT;
> -    if ( !mem_sharing_page_lock(page) )
> -        goto out;

Isn't it too limited to mention just nomination in the comment?
Unsharing, for example, also needs to be prevented (or else
the tail of sharing could race with the beginning of unsharing).
The other paths look to similarly hold the GFN, so the
reasoning is fine for them as well. Except maybe audit() - what
about races with that one?
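
(To illustrate the serialization argument: the nominate and unshare
paths both sit inside a get_gfn() / put_gfn() critical section for the
gfn in question, along the lines of this sketch - not actual patch
code:)

    /* Sketch only: the per-gfn p2m lock is what serializes the paths. */
    mfn = get_gfn(d, gfn, &p2mt);   /* acquires the per-gfn p2m lock */
    /* ... nominate / unshare work on this gfn ... */
    put_gfn(d, gfn);                /* drops the lock again */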

> @@ -1214,7 +1212,7 @@ int __mem_sharing_unshare_page(struct domain *d,
>      p2m_type_t p2mt;
>      mfn_t mfn;
>      struct page_info *page, *old_page;
> -    int last_gfn;
> +    int last_gfn, rc = 0;

I consider this unhelpful: last_gfn really wants to be bool, and
hence wants to not share a declaration with rc. But you're the
maintainer ...
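
I.e. preferably:

    bool last_gfn;
    int rc = 0;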

> @@ -1226,6 +1224,15 @@ int __mem_sharing_unshare_page(struct domain *d,
>          return 0;
>      }
>  
> +    /* lock nested p2ms to avoid lock-order violation */

Would you mind mentioning here the other side of the possible
violation, to aid the reader?
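
E.g. something along these lines (wording merely illustrative):

    /*
     * Lock the nested p2ms up front: they rank ahead of the other mm
     * locks taken further down, so acquiring any of them later on
     * would trip the mm-lock order checking.
     */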

> +    if ( unlikely(nestedhvm_enabled(d)) )
> +    {
> +        int i;

unsigned int please (also further down), no matter that there may
be other similar examples of (bad) use of plain int.
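
I.e.:

        unsigned int i;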

> +        for ( i = 0; i < MAX_NESTEDP2M; i++ )
> +            p2m_lock(d->arch.nested_p2m[i]);

From a brief scan, this is the first instance of acquiring all
nested p2m locks in one go. Ordering these by index is perhaps
fine, but I think this wants spelling out in e.g. mm-locks.h. Of
course the question is if you really need to go this far, i.e.
whether really all of the locks need holding. This is even more
so with p2m_flush_table_locked() not really looking to be a
quick operation when many pages have accumulated for it.
I.e. the overall lock holding time may turn out even more
excessive this way than it apparently already is.
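
Perhaps something like this next to the other ordering rules in
mm-locks.h (again, wording merely illustrative):

    /*
     * When multiple nested p2m locks need to be held at the same time,
     * they must be acquired in ascending array-index order, and
     * dropped in reverse order.
     */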

> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1598,8 +1598,17 @@ void
>  p2m_flush_nestedp2m(struct domain *d)
>  {
>      int i;
> +    struct p2m_domain *p2m;
> +
>      for ( i = 0; i < MAX_NESTEDP2M; i++ )
> -        p2m_flush_table(d->arch.nested_p2m[i]);
> +    {
> +        p2m = d->arch.nested_p2m[i];

Please move the declaration here, making this the variable's
initializer (unless line length constraints make the latter
undesirable).
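
I.e.:

    for ( i = 0; i < MAX_NESTEDP2M; i++ )
    {
        struct p2m_domain *p2m = d->arch.nested_p2m[i];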

Jan