
Re: [Xen-devel] [PATCH 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP



>>> On 09.02.17 at 16:56, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 09/02/17 15:50, Boris Ostrovsky wrote:
>>
>>
>> On 02/09/2017 09:27 AM, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Paul Durrant [mailto:paul.durrant@xxxxxxxxxx]
>>>> Sent: 09 February 2017 14:18
>>>> To: xen-devel@xxxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx 
>>>> Cc: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Boris Ostrovsky
>>>> <boris.ostrovsky@xxxxxxxxxx>; Juergen Gross <jgross@xxxxxxxx>
>>>> Subject: [PATCH 2/3] xen/privcmd: Add IOCTL_PRIVCMD_DM_OP
>>>>
>>>> Recently a new dm_op[1] hypercall was added to Xen to provide a
>>>> mechanism for restricting device emulators (such as QEMU) to a
>>>> limited set of hypervisor operations, and to allow those operations
>>>> to be audited in the kernel of the domain in which they run.
>>>>
>>>> This patch adds IOCTL_PRIVCMD_DM_OP as a gateway for
>>>> __HYPERVISOR_dm_op, bouncing the caller's buffers through kernel
>>>> memory so that the address ranges can be audited (and obviating
>>>> the need to bounce through locked memory in user-space).
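
As a reference point, here is roughly what the bounce scheme described
above could look like. This is a simplified sketch rather than the
posted patch: the structure layouts, the HYPERVISOR_dm_op wrapper,
the function names and the error handling are illustrative stand-ins
following privcmd's obvious conventions, and a real implementation
would also bound-check the number and sizes of the buffers.

    #include <linux/slab.h>
    #include <linux/types.h>
    #include <linux/uaccess.h>

    /* Simplified stand-ins for the uapi descriptors. */
    struct privcmd_dm_op_buf {
            void __user *uptr;
            size_t size;
    };

    struct privcmd_dm_op {
            __u16 dom;      /* domid_t in real code */
            __u16 num;      /* number of buffers */
            const struct privcmd_dm_op_buf __user *ubufs;
    };

    /* Stand-ins for the hypervisor-facing descriptor and hypercall. */
    struct xen_dm_op_buf {
            void *h;
            __u64 size;
    };
    extern long HYPERVISOR_dm_op(__u16 dom, unsigned int num,
                                 struct xen_dm_op_buf *bufs);

    static long privcmd_ioctl_dm_op(void __user *udata)
    {
            struct privcmd_dm_op kdata;
            struct privcmd_dm_op_buf *kbufs = NULL;
            struct xen_dm_op_buf *xbufs = NULL;
            unsigned int i;
            long rc;

            if (copy_from_user(&kdata, udata, sizeof(kdata)))
                    return -EFAULT;

            kbufs = kcalloc(kdata.num, sizeof(*kbufs), GFP_KERNEL);
            xbufs = kcalloc(kdata.num, sizeof(*xbufs), GFP_KERNEL);
            if (!kbufs || !xbufs) {
                    rc = -ENOMEM;
                    goto out;
            }

            /* Pull the array of buffer descriptors out of user space. */
            if (copy_from_user(kbufs, kdata.ubufs,
                               kdata.num * sizeof(*kbufs))) {
                    rc = -EFAULT;
                    goto out;
            }

            /* Bounce each caller buffer through kernel memory; this is
             * the point at which the address ranges can be audited. */
            for (i = 0; i < kdata.num; i++) {
                    xbufs[i].h = kmalloc(kbufs[i].size, GFP_KERNEL);
                    xbufs[i].size = kbufs[i].size;
                    if (!xbufs[i].h) {
                            rc = -ENOMEM;
                            goto out;
                    }
                    if (copy_from_user(xbufs[i].h, kbufs[i].uptr,
                                       kbufs[i].size)) {
                            rc = -EFAULT;
                            goto out;
                    }
            }

            rc = HYPERVISOR_dm_op(kdata.dom, kdata.num, xbufs);

            /* dm_op buffers can carry results, so copy back out. */
            for (i = 0; i < kdata.num && rc >= 0; i++)
                    if (copy_to_user(kbufs[i].uptr, xbufs[i].h,
                                     xbufs[i].size))
                            rc = -EFAULT;

    out:
            if (xbufs)
                    for (i = 0; i < kdata.num; i++)
                            kfree(xbufs[i].h);
            kfree(xbufs);
            kfree(kbufs);
            return rc;
    }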
>>>
>>> Actually, it strikes me (now that I've posted the patch) that I
>>> should probably just mlock the user buffers rather than bouncing
>>> them through the kernel... Anyway, I'd still appreciate review on
>>> the other aspects of the patch.
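
For comparison, a hypothetical user-space fragment showing what that
alternative would ask of the caller (the buffer setup and the ioctl
wiring are elided; this is not libxc code):

    #include <err.h>
    #include <sys/mman.h>

    static void dm_op_with_locked_buf(int privcmd_fd, void *buf,
                                      size_t size)
    {
            /* Pin the buffer so its pages stay resident while the
             * hypercall may be reading from or writing to them. */
            if (mlock(buf, size))
                    err(1, "mlock");

            /* ... build the dm_op descriptor referencing buf and
             * issue ioctl(privcmd_fd, IOCTL_PRIVCMD_DM_OP, ...) ... */

            munlock(buf, size);
    }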
>>
>>
>> Are you suggesting that the caller (user) mlocks the buffers?
> 
> Doesn't libxc already use the hypercall buffer API for each of the buffers?
> 
> The kernel oughtn't to need to do anything special to the user
> pointers it has, other than call access_ok() on them.

And translate the 32-bit layout to 64-bit for a compat caller.

Jan
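
Putting those last two points into code: a rough sketch of the
access_ok() check and of the compat translation, reusing the
descriptor stand-ins from the sketch above. The layout and helper
names are illustrative rather than taken from the patch, and note
that access_ok() still took a VERIFY_* argument in kernels of this
vintage (it was dropped in later kernels).

    #include <linux/compat.h>
    #include <linux/uaccess.h>

    /* Validate each user range up front instead of bouncing it;
     * the actual accesses are then left to copy_*_user / Xen. */
    static int check_dm_op_bufs(const struct privcmd_dm_op_buf *kbufs,
                                unsigned int num)
    {
            unsigned int i;

            for (i = 0; i < num; i++)
                    if (!access_ok(VERIFY_WRITE, kbufs[i].uptr,
                                   kbufs[i].size))
                            return -EFAULT;
            return 0;
    }

    /* The same descriptor as laid out by a 32-bit caller: pointer
     * and size shrink to 32 bits, so a compat ioctl path has to
     * widen them into the native layout before use. */
    struct compat_privcmd_dm_op_buf {
            compat_uptr_t uptr;
            compat_size_t size;
    };

    static int get_dm_op_buf32(struct privcmd_dm_op_buf *kbuf,
                               const void __user *uarg)
    {
            struct compat_privcmd_dm_op_buf cbuf;

            if (copy_from_user(&cbuf, uarg, sizeof(cbuf)))
                    return -EFAULT;

            kbuf->uptr = compat_ptr(cbuf.uptr);
            kbuf->size = cbuf.size;
            return 0;
    }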
