[xen master] x86/shadow: use __put_user() instead of __copy_to_user()
commit a8cd9b8aff93b5d55f126910dde77f90d973ac76
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Tue Jan 26 14:13:18 2021 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Jan 26 14:13:18 2021 +0100

    x86/shadow: use __put_user() instead of __copy_to_user()

    In a subsequent patch I would almost have broken the logic here, if I
    hadn't happened to read through the comment at the top of
    safe_write_entry(): __copy_to_user() does not provide a guarantee
    shadow_write_entries() requires - it's only an optimization that it
    makes use of __put_user_size() for certain sizes. Use __put_user()
    directly, which does expand to a single (memory accessing) insn.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
---
 xen/arch/x86/mm/shadow/multi.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index d24ccde035..da46eae835 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -776,9 +776,9 @@ shadow_write_entries(void *d, void *s, int entries, mfn_t mfn)
     /* Because we mirror access rights at all levels in the shadow, an
      * l2 (or higher) entry with the RW bit cleared will leave us with
      * no write access through the linear map.
-     * We detect that by writing to the shadow with copy_to_user() and
+     * We detect that by writing to the shadow with __put_user() and
      * using map_domain_page() to get a writeable mapping if we need to. */
-    if ( __copy_to_user(d, d, sizeof (unsigned long)) != 0 )
+    if ( __put_user(*dst, dst) )
     {
         perfc_incr(shadow_linear_map_failed);
         map = map_domain_page(mfn);
--
generated by git-patchbot for /home/xen/git/xen.git#master
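
The guarantee the patch depends on can be pictured with a minimal,
stand-alone C sketch (ordinary C, not Xen's actual uaccess macros): a
full-width store compiles to a single memory-accessing instruction,
whereas a generic copy routine gives no such promise and may split the
write into narrower accesses.

    /* Minimal sketch, ordinary C -- not Xen's uaccess implementation. */
    #include <stdint.h>
    #include <string.h>

    /* The property __put_user() provides: the destination is written by
     * exactly one full-width store, so a concurrent reader can never
     * observe a half-updated entry. */
    static void write_entry_single_insn(volatile uint64_t *dst, uint64_t val)
    {
        *dst = val;              /* one 64-bit mov on x86-64 */
    }

    /* What a generic copy routine is allowed to do: __copy_to_user()
     * only happens to use a single store for some sizes; it is an
     * optimization, not a guarantee. */
    static void write_entry_generic_copy(void *dst, const void *src, size_t n)
    {
        memcpy(dst, src, n);     /* may be performed byte-wise */
    }

    int main(void)
    {
        uint64_t shadow_entry = 0;
        const uint64_t new_entry = 0x8000000000000067ULL; /* arbitrary PTE-like value */

        write_entry_single_insn(&shadow_entry, new_entry);
        write_entry_generic_copy(&shadow_entry, &new_entry, sizeof(new_entry));
        return 0;
    }

In the patched code the probe write additionally relies on __put_user()
returning non-zero when the store faults (as the "if" condition in the
diff shows); that exception-fixup machinery is outside the scope of the
sketch above.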