Re: [Xen-devel] ARM fixes for my improved privcmd patch.
On 19/12/12 12:22, Ian Campbell wrote:
> On Wed, 2012-12-19 at 12:10 +0000, Mats Petersson wrote:
>> Yes, but the ONLY error that is "acceptable" (as in, doesn't lead to
>> the calling code revoking the mapping and returning an error) is
>> ENOENT.
>>>> + only likely to return EFAULT or some other "things are very
>>>> + bad" error code, which the rest of the calling code won't
>>>> + be able to fix up. So we just exit with the error we got.
>>> I expect it is more important to accumulate the individual errors
>>> from remap_pte_fn into err_ptr.
>> Yes, but since that exits on error with EFAULT, the calling code
>> won't "accept" the errors, and so the whole house of cards falls
>> apart at that point. There should probably be a task to fix this up
>> properly, hence the comment. But right now, any error besides ENOENT
>> is "unacceptable" to the callers of this code. If you want me to add
>> this to the comment, I'm happy to. But as long as remap_pte_fn
>> returns EFAULT on error, nothing will work after an error.
> Are you sure? privcmd.c has some special casing for ENOENT, but it
> looks like it should just pass other errors through to userspace. In
> any case, surely this needs fixing? On the x86 side err_ptr is the
> result of the mmu_update hypercall, which can already be other than
> ENOENT, including EFAULT, ESRCH etc.

Or at least, that's how I believe it SHOULD be - since only ENOENT is a
"success" error code, anything else pretty much means that the requested
operation didn't work properly. If you are aware of any use-case where
EFAULT, ESRCH or other error codes would still result in a valid, usable
memory mapping - I have a fair understanding of the xc_* code, and
although I may not know every piece of that code, I'm fairly certain
that is the expected behaviour.

--
Mats

> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel