
Re: [Xen-devel] [xen-unstable test] 123379: regressions - FAIL



>>> On 13.06.18 at 08:50, <jgross@xxxxxxxx> wrote:
> On 13/06/18 08:11, Jan Beulich wrote:
>> Teaching the privcmd driver of all
>> the indirections in hypercall request structures doesn't look very
>> attractive (or maintainable). Or are you thinking of the caller
>> providing sideband information describing the buffers involved,
>> perhaps along the lines of how dm_op was designed?
> 
> I thought about that, yes. libxencall already has all the needed data
> for that. Another possibility would be a dedicated ioctl for registering
> a hypercall buffer (or some of them).

I'm not sure that's an option: Is it legitimate (secure) to retain the
effects of get_user_pages() across system calls?

>> There's another option, but that has potentially severe drawbacks
>> too: Instead of returning -EFAULT on buffer access issues, we
>> could raise #PF on the very hypercall insn. Maybe something to
>> consider as an opt-in for PV/PVH, and as default for HVM.
> 
> Hmm, I'm not sure this will solve any problem. I'm not aware that it
> is considered good practice to just access a user buffer from kernel
> without using copyin()/copyout() when you haven't locked the page(s)
> via get_user_pages(), even if the buffer was mlock()ed. Returning
> -EFAULT is the right thing to do, I believe.

But we're talking about the very copyin()/copyout(), except that here
it's amortized by doing the operation just once (in the
hypervisor). A #PF would arise from syscall buffer copyin()/copyout(),
and the suggestion was to produce the same effect for the squashed
operation. Perhaps we wouldn't want #PF to come back from ordinary
(kernel invoked) hypercalls, but ones relayed by privcmd are different
in many ways anyway (see the stac()/clac() pair around the actual
call, for example).

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

