Re: [Xen-devel] [PATCH v3 1/2] x86/mem-sharing: Bulk mem-sharing entire domains
>>> On 12.05.16 at 17:25, <tamas@xxxxxxxxxxxxx> wrote:
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1294,6 +1294,43 @@ int relinquish_shared_pages(struct domain *d)
>      return rc;
>  }
>
> +static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
> +                      struct mem_sharing_op_bulk *bulk)
> +{
> +    int rc;
> +    shr_handle_t sh, ch;
> +
> +    while( bulk->start <= max )
> +    {
> +        rc = mem_sharing_nominate_page(d, bulk->start, 0, &sh);
> +        if ( rc == -ENOMEM )
> +            break;
> +        if ( !rc )
> +        {
> +            rc = mem_sharing_nominate_page(cd, bulk->start, 0, &ch);
> +            if ( rc == -ENOMEM )
> +                break;
If we get to this break, how will the caller know that the first
nomination succeeded but the second didn't? Or perhaps there
is some undo logic missing here?
> +            if ( !rc )
> +                mem_sharing_share_pages(d, bulk->start, sh, cd, bulk->start,
> +                                        ch);
You shouldn't be ignoring errors here.
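For illustration, the inner part of the loop could propagate every failure
instead of just -ENOMEM, e.g. (rough sketch only, using the helpers the patch
already uses):

    rc = mem_sharing_nominate_page(d, bulk->start, 0, &sh);
    if ( rc )
        break;

    rc = mem_sharing_nominate_page(cd, bulk->start, 0, &ch);
    if ( rc )
        break;          /* bulk->start tells the caller which gfn failed */

    rc = mem_sharing_share_pages(d, bulk->start, sh, cd, bulk->start, ch);
    if ( rc )
        break;

That way the return value reflects whatever went wrong, and the (not yet
incremented) bulk->start identifies the page it went wrong on.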
> +        }
> +
> +        ++(bulk->start);
Pointless parentheses.
> +        /* Check for continuation if it's not the last iteration. */
> +        if ( bulk->start < max && hypercall_preempt_check() )
The loop head has <=; why < here?
> +        {
> +            rc = 1;
I'd recommend using -ERESTART here, as we do elsewhere.
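I.e. something along these lines (just a sketch; getting mso, with its updated
start field, copied back to the guest before the continuation is a separate
detail):

    if ( bulk->start <= max && hypercall_preempt_check() )
    {
        rc = -ERESTART;
        break;
    }

with the invocation site then doing

    rc = bulk_share(d, cd, max_sgfn, &mso.u.bulk);
    if ( rc == -ERESTART )
        rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
                                           "lh", XENMEM_sharing_op, arg);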
> +            break;
> +        }
> +    }
> +
> +    /* We only propagate -ENOMEM so reset rc here */
> +    if ( rc < 0 && rc != -ENOMEM )
> +        rc = 0;
What's the rationale for discarding all other errors? At least the
patch description, but perhaps even the comment (which btw
is lacking a full stop) should be explaining this.
> @@ -1468,6 +1505,69 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>      }
>      break;
>
> +    case XENMEM_sharing_op_bulk_share:
> +    {
> +        unsigned long max_sgfn, max_cgfn;
> +        struct domain *cd;
> +
> +        rc = -EINVAL;
> +        if ( !mem_sharing_enabled(d) )
> +            goto out;
> +
> +        rc = rcu_lock_live_remote_domain_by_id(mso.u.bulk.client_domain,
> +                                               &cd);
> +        if ( rc )
> +            goto out;
> +
> +        rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
Either you pass XENMEM_sharing_op_share here, or you need to
update xen/xsm/flask/policy/access_vectors (even if it's only a
comment which needs updating).
That said - are this and the similar pre-existing XSM checks actually
correct? I.e. is one of the two domains here really controlling the
other? I would have expected that a tool stack domain initiates the
sharing between two domains it controls...
> +        if ( rc )
> +        {
> +            rcu_unlock_domain(cd);
> +            goto out;
> +        }
> +
> +        if ( !mem_sharing_enabled(cd) )
> +        {
> +            rcu_unlock_domain(cd);
> +            rc = -EINVAL;
> +            goto out;
> +        }
> +
> +        if ( !atomic_read(&d->pause_count) ||
> +             !atomic_read(&cd->pause_count) )
> +        {
> +            rcu_unlock_domain(cd);
> +            rc = -EINVAL;
> +            goto out;
> +        }
> +
> +        max_sgfn = domain_get_maximum_gpfn(d);
> +        max_cgfn = domain_get_maximum_gpfn(cd);
> +
> +        if ( max_sgfn != max_cgfn || max_sgfn < mso.u.bulk.start )
Why would the two domains need to agree in their maximum
GPFN? There's nothing similar in this file so far. Nor does the
right side of the || match anything pre-existing...
> @@ -488,7 +489,18 @@ struct xen_mem_sharing_op {
>              uint64_aligned_t client_gfn;    /* IN: the client gfn */
>              uint64_aligned_t client_handle; /* IN: handle to the client page */
>              domid_t  client_domain; /* IN: the client domain id */
> -        } share;
> +        } share;
> +        struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
> +            uint64_aligned_t start;          /* IN: start gfn. Set to 0 for
> +                                                full deduplication. Field is
> +                                                used internally and may change
> +                                                when the hypercall returns. */
> +            uint64_aligned_t shared;         /* OUT: the number of gfns
> +                                                that are shared after this
> +                                                operation including pages
> +                                                already shared before */
> +            domid_t client_domain;           /* IN: the client domain id */
> +        } bulk;
Let's not repeat pre-existing mistakes: There is explicit padding
missing here, which then also ought to be checked to be zero on
input.
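For instance (illustrative only; the field name and how the 48 bits of padding
get split are of course arbitrary):

    struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
        uint64_aligned_t start;          /* IN: start gfn */
        uint64_aligned_t shared;         /* OUT: gfns shared by this op */
        domid_t client_domain;           /* IN: the client domain id */
        uint16_t _pad[3];                /* Must be set to 0 */
    } bulk;

plus something like

    if ( mso.u.bulk._pad[0] || mso.u.bulk._pad[1] || mso.u.bulk._pad[2] )
    {
        rc = -EINVAL;
        goto out;
    }

early in the XENMEM_sharing_op_bulk_share handler.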
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel