Re: [Xen-devel] [Patch] Disallow SMEP for PV guest
On Thu, Jun 02, 2011 at 12:01:33AM +0800, Li, Xin wrote:
> > >>> This patch disallows SMEP for PV guest.
> > >>
> > >> What are the reasons for it? What do we gain from it?
> > >
> > > x86_64 PV guests run in ring 3, which SMEP doesn't apply to.
> > >
> > > A kernel that supports SMEP will set it by writing to CR4. We could
> > > probably silently ignore such writes from PV guests, but it's better
> > > not to let the guest see the feature at all.
> >
> > Well, maybe. But if you hide the feature from the guest in CPUID then you
> > should also hide it in CR4, which will involve some messing with
> > real_cr4_to_pv_guest_cr4() and pv_guest_cr4_to_real_cr4(), in a fairly
> > obvious manner. And you should hide it in dom0's CPUID too.
>
> People are very interested in this feature :).

Hmm, can you give more details on what SMEP tries to do? The "very
interested" sounds like I should be aware of this, but .. ah, here it is:

SMEP prevents the CPU in kernel mode from jumping to an executable page
that does not have the kernel/system flag set in the PTE. This prevents
the kernel from executing user-space code accidentally or maliciously, so
it for example stops kernel exploits from jumping to specially prepared
user-mode shellcode. A violation raises a page fault (#PF) with an error
code identical to an XD violation.

> As it can't apply to ring 3, an x86_64 PV guest kernel accessing user code
> won't trigger an instruction-fetch page fault, so it makes no sense to use
> it there.
>
> Definitely we should hide it from the dom0 kernel. Should the change be in
> Xen or in the pvops dom0?

Ugh, a patch against the paravirt kernel would only cover the 3.1 kernel,
so you could still run with SMEP enabled on older kernels. Sounds like a
candidate for the Xen hypervisor?
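To make the masking suggestion above concrete, here is a rough sketch of the
idea. Only the two conversion helper names come from the thread; the
standalone structure, the host_has_smep flag, and the CPUID helper are
illustrative assumptions, not the actual Xen code.

/*
 * Rough, illustrative sketch only -- not the actual Xen patch.  The two
 * conversion helpers take their names from the thread; the bit positions
 * are the architectural ones (CR4.SMEP is bit 20, CPUID.(EAX=7,ECX=0):EBX
 * bit 7 advertises SMEP).  Everything else is assumed for the example.
 */
#include <stdint.h>

#define X86_CR4_SMEP      (1UL << 20)  /* CR4 bit 20: SMEP enable */
#define CPUID7_0_EBX_SMEP (1U  << 7)   /* CPUID.(EAX=7,ECX=0):EBX bit 7 */

/* CR4 value shown to the PV guest: SMEP masked out, since it can't use it. */
static inline unsigned long real_cr4_to_pv_guest_cr4(unsigned long real_cr4)
{
    return real_cr4 & ~X86_CR4_SMEP;
}

/* CR4 value actually loaded on the CPU: keep SMEP on whenever the host
 * supports it, regardless of what the guest wrote. */
static inline unsigned long pv_guest_cr4_to_real_cr4(unsigned long guest_cr4,
                                                     int host_has_smep)
{
    return guest_cr4 | (host_has_smep ? X86_CR4_SMEP : 0);
}

/* And hide the feature bit from the guest's (including dom0's) CPUID. */
static inline void pv_mask_cpuid_leaf7(uint32_t *ebx)
{
    *ebx &= ~CPUID7_0_EBX_SMEP;
}

In Xen itself these conversions are macros that juggle a handful of other
CR4 bits as well; the sketch only shows the direction of the SMEP masking,
and the dom0 CPUID path would need the same treatment.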