
Re: [Xen-devel] [PATCH v5 1/2] x86/mem-sharing: Bulk mem-sharing entire domains




On Jun 14, 2016 10:33, "Konrad Rzeszutek Wilk" <konrad.wilk@xxxxxxxxxx> wrote:
>
> > diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> > index a522423..ba06fb0 100644
> > --- a/xen/arch/x86/mm/mem_sharing.c
> > +++ b/xen/arch/x86/mm/mem_sharing.c
> > @@ -1294,6 +1294,54 @@ int relinquish_shared_pages(struct domain *d)
> >      return rc;
> >  }
> >
> > +static int bulk_share(struct domain *d, struct domain *cd, unsigned long limit,
> > +                      struct mem_sharing_op_bulk *bulk)
> > +{
> > +    int rc = 0;
> > +    shr_handle_t sh, ch;
> > +
> > +    while( limit > bulk->start )
>
> You are missing a space there.

Ack.

> > +    {
> > +        /*
> > +         * We only break out if we run out of memory as individual pages may
> > +         * legitimately be unsharable and we just want to skip over those.
> > +         */
> > +        rc = mem_sharing_nominate_page(d, bulk->start, 0, &sh);
> > +        if ( rc == -ENOMEM )
> > +            break;
> > +        if ( !rc )
> > +        {
> > +            rc = mem_sharing_nominate_page(cd, bulk->start, 0, &ch);
> > +            if ( rc == -ENOMEM )
> > +                break;
> > +            if ( !rc )
> > +            {
> > +                /* If we get here this should be guaranteed to succeed. */
> > +                rc = mem_sharing_share_pages(d, bulk->start, sh,
> > +                                             cd, bulk->start, ch);
> > +                ASSERT(!rc);
> > +            }
> > +        }
> > +
> > +        /* Check for continuation if it's not the last iteration. */
> > +        if ( limit > ++bulk->start && hypercall_preempt_check() )
>
> I'm surprised the compiler didn't complain to you about the lack of parentheses.

This seems to be the standard way of creating a continuation; it's used in multiple places throughout Xen. I don't personally like it much, but I guess it's better to be consistent.
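
For anyone following along, here is a minimal standalone sketch of the pattern (not the actual Xen code: preempt_check() below is a stand-in for Xen's hypercall_preempt_check(), and the op struct is simplified). The point is that progress is stored in the op itself, so when the loop bails out early the caller can re-issue the call and resume where it left off:

#include <stdio.h>
#include <stdbool.h>

/* Stand-in for hypercall_preempt_check(): pretend we must yield
 * after every 4 units of work. */
static bool preempt_check(unsigned long done)
{
    return (done % 4) == 0;
}

struct bulk_op {
    unsigned long start;   /* progress lives in the op itself */
};

/* Returns 1 if a continuation is needed, 0 once the loop has completed. */
static int bulk_work(unsigned long limit, struct bulk_op *bulk)
{
    while ( limit > bulk->start )
    {
        /* ... do one page's worth of work here ... */

        /* Check for continuation if it's not the last iteration. */
        if ( limit > ++bulk->start && preempt_check(bulk->start) )
            return 1;   /* caller re-issues the call with the same op */
    }
    return 0;
}

int main(void)
{
    struct bulk_op bulk = { .start = 0 };
    int continuations = 0;

    /* The caller keeps re-issuing the call until no continuation is asked for. */
    while ( bulk_work(10, &bulk) )
        continuations++;

    printf("finished after %d continuations, start=%lu\n",
           continuations, bulk.start);
    return 0;
}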

>
> > +        {
> > +            rc = 1;
> > +            break;
> > +        }
> > +    }
> > +
> > +    /*
> > +     * We only propagate -ENOMEM as individual pages may fail with -EINVAL,
> > +     * and for bulk sharing we only care if -ENOMEM was encountered so we reset
> > +     * rc here.
> > +     */
> > +    if ( rc < 0 && rc != -ENOMEM )
> > +        rc = 0;
> > +
> > +    return rc;
> > +}
> > +
> >  int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
> >  {
> >      int rc;
> > @@ -1468,6 +1516,79 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
> >          }
> >          break;
> >
> > +        case XENMEM_sharing_op_bulk_share:
> > +        {
> > +            unsigned long max_sgfn, max_cgfn;
> > +            struct domain *cd;
> > +
> > +            rc = -EINVAL;
> > +            if( mso.u.bulk._pad[0] || mso.u.bulk._pad[1] || mso.u.bulk._pad[2] )
>
> The "if("..

Ack.

Thanks,
Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
