
Re: [Xen-devel] [PATCH v2 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP



> -----Original Message-----
> From: Boris Ostrovsky [mailto:boris.ostrovsky@xxxxxxxxxx]
> Sent: 10 February 2017 16:18
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> linux-kernel@xxxxxxxxxxxxxxx
> Cc: Juergen Gross <jgross@xxxxxxxx>
> Subject: Re: [PATCH v2 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP
> 
> On 02/10/2017 09:24 AM, Paul Durrant wrote:
> > +static long privcmd_ioctl_dm_op(void __user *udata)
> > +{
> > +   struct privcmd_dm_op kdata;
> > +   struct privcmd_dm_op_buf *kbufs;
> > +   unsigned int nr_pages = 0;
> > +   struct page **pages = NULL;
> > +   struct xen_dm_op_buf *xbufs = NULL;
> > +   unsigned int i;
> > +   long rc;
> > +
> > +   if (copy_from_user(&kdata, udata, sizeof(kdata)))
> > +           return -EFAULT;
> > +
> > +   if (kdata.num == 0)
> > +           return 0;
> > +
> > +   /*
> > +    * Set a tolerable upper limit on the number of buffers
> > +    * without being overly restrictive, since we can't easily
> > +    * predict what future dm_ops may require.
> > +    */
> 
> I think this deserves its own macro, since it really has nothing to do
> with page size, does it? Especially since you reference it again
> below.
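
For illustration, such a macro might look like the following; the name
and derivation here are a sketch only, not taken from the patch:

  /* Illustrative cap on the number of buffer descriptors per call. */
  #define PRIVCMD_DMOP_MAX_NUM_BUFS \
          (PAGE_SIZE / sizeof(struct privcmd_dm_op_buf))

          if (kdata.num > PRIVCMD_DMOP_MAX_NUM_BUFS)
                  return -E2BIG;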
> 
> 
> > +   if (kdata.num * sizeof(*kbufs) > PAGE_SIZE)
> > +           return -E2BIG;
> > +
> > +   kbufs = kcalloc(kdata.num, sizeof(*kbufs), GFP_KERNEL);
> > +   if (!kbufs)
> > +           return -ENOMEM;
> > +
> > +   if (copy_from_user(kbufs, kdata.ubufs,
> > +                      sizeof(*kbufs) * kdata.num)) {
> > +           rc = -EFAULT;
> > +           goto out;
> > +   }
> > +
> > +   for (i = 0; i < kdata.num; i++) {
> > +           if (!access_ok(VERIFY_WRITE, kbufs[i].uptr,
> > +                          kbufs[i].size)) {
> > +                   rc = -EFAULT;
> > +                   goto out;
> > +           }
> > +
> > +           nr_pages += DIV_ROUND_UP(
> > +                   offset_in_page(kbufs[i].uptr) + kbufs[i].size,
> > +                   PAGE_SIZE);
> > +   }
> > +
> > +   /*
> > +    * Again, set a tolerable upper limit on the number of pages
> > +    * needed to lock all the buffers without being overly
> > +    * restrictive, since we can't easily predict the size of
> > +    * buffers future dm_ops may use.
> > +    */
> 
> OTOH, these two cases describe different types of copying (the first
> is for the buffer descriptors, the second for the buffers themselves),
> so should they be limited by the same value?
> 

I think there needs to be some limit, and limiting the allocation to a page
was the best I came up with. Can you think of a better one?
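
If the two checks ought to differ, a second independent limit for the
pinned pages would at least make that tunable separately. Again just a
sketch, with a hypothetical name and derivation:

  /* Illustrative cap on the number of pages pinned for the buffers. */
  #define PRIVCMD_DMOP_MAX_NR_PAGES \
          (PAGE_SIZE / sizeof(struct page *))

          if (nr_pages > PRIVCMD_DMOP_MAX_NR_PAGES) {
                  rc = -E2BIG;
                  goto out;
          }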

> > +   if (nr_pages * sizeof(*pages) > PAGE_SIZE) {
> > +           rc = -E2BIG;
> > +           goto out;
> > +   }
> > +
> > +   pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
> > +   if (!pages) {
> > +           rc = -ENOMEM;
> > +           goto out;
> > +   }
> > +
> > +   xbufs = kcalloc(kdata.num, sizeof(*xbufs), GFP_KERNEL);
> > +   if (!xbufs) {
> > +           rc = -ENOMEM;
> > +           goto out;
> > +   }
> > +
> > +   rc = lock_pages(kbufs, kdata.num, pages, nr_pages);
> 
> 
> Aren't those buffers already locked (as Andrew mentioned)? They are
> mmapped with MAP_LOCKED.

No, they're not. The new libxendevicemodel code I have does not make any use
of xencall or guest handles when privcmd supports the DM_OP ioctl, so the
caller's buffers will not be locked.
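
The hunk above doesn't show lock_pages() itself, so for reference, here
is a minimal sketch of how such a helper might pin the buffers with
get_user_pages_fast(); this is a reconstruction, and the patch may differ:

  static int lock_pages(struct privcmd_dm_op_buf kbufs[], unsigned int num,
                        struct page *pages[], unsigned int nr_pages)
  {
          unsigned int i;

          for (i = 0; i < num; i++) {
                  unsigned int requested = DIV_ROUND_UP(
                          offset_in_page(kbufs[i].uptr) + kbufs[i].size,
                          PAGE_SIZE);
                  int pinned;

                  /* Don't exceed the pages the caller allocated for us. */
                  if (requested > nr_pages)
                          return -ENOSPC;

                  /* Pin for write: the hypervisor may fill the buffers. */
                  pinned = get_user_pages_fast(
                          (unsigned long)kbufs[i].uptr, requested, 1, pages);
                  if (pinned < 0)
                          return pinned;

                  nr_pages -= pinned;
                  pages += pinned;
          }

          return 0;
  }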

> 
> And I also wonder whether we need to take rlimit(RLIMIT_MEMLOCK) into
> account.
> 

Maybe. I'll look at that.
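
FWIW, a sketch of what that accounting might look like, assuming the
pinned pages should be charged against the caller's locked-memory
limit; the check below is hypothetical, not part of the patch:

          /* Hypothetical: refuse to pin more than the caller may mlock. */
          if (nr_pages > rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT) {
                  rc = -ENOMEM;
                  goto out;
          }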

  Paul

> -boris
> 
> 

