
Re: [Xen-devel] [V3 PATCH 7/9] x86/hvm: pkeys, add pkeys support for guest_walk_tables

On 16/12/15 16:28, Tim Deegan wrote:
> Hi,
> At 15:36 +0000 on 16 Dec (1450280191), George Dunlap wrote:
>> (hvm_fetch_from_guest_virt() seems to only set PFEC_insn_fetch if nx or
>> smep are enabled in the guest.  This seems inconsistent to me with the
>> treatment of PFEC_reserved_bit: it seems like
>> hvm_fetch_from_guest_virt() should always pass in PFEC_insn_fetch,
>> particularly as guest_walk_tables() will already gate the checks based
>> on whether nx or smep is enabled in the guest.  Tim, you know of any
>> reason for this?)
> This code is following the hardware, IIRC: on real CPUs without NX
> enabled, pagefault error codes will not have that bit set even when
> caused by instruction fetches.  Looking at the SDM (3A.4.7) the
> current rules are for that bit to be set iff ((PAE && NXE) || SMEP).
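The SDM rule Tim quotes can be captured in a few lines of C. This is an illustrative standalone helper (the name pfec_insn_fetch_bit and the flag definition are made up for this sketch, not the actual Xen code):

```c
#include <stdbool.h>

#define PFEC_insn_fetch (1u << 4)  /* the I/D bit of the page-fault error code */

/*
 * Per SDM Vol 3A, 4.7: the I/D bit is set in the error code of an
 * instruction-fetch fault iff (PAE paging && EFER.NXE) || CR4.SMEP.
 * Hypothetical standalone helper for illustration only.
 */
static unsigned int pfec_insn_fetch_bit(bool pae, bool nxe, bool smep)
{
    return ((pae && nxe) || smep) ? PFEC_insn_fetch : 0;
}
```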

Right, but I think your answer points to the fundamental problem with
the way this code has been written.  The pfec value passed to
gva_to_gfn() serves two separate functions: 1) telling
guest_walk_tables() what kind of access is happening, so that it can
check the appropriate bits; and 2) acting as the pfec value to pass back
to the guest when delivering a page fault.

These two functions should be clearly separated, but right now they're
not: hvm_fetch_from_guest_virt() is "pre-clearing" PFEC_insn_fetch, so
that guest_walk_tables() doesn't even know that an instruction fetch is
happening; as such, it's likely that the pkey code in this series will
do the wrong thing (fault on an instruction fetch, which protection
keys should not affect) if pkeys are enabled but nx and smep are not.

Really those should be two separate parameters, one in and one out; and
it should be gva_to_gfn()'s job to translate the missing bits from
guest_walk_tables() into a pfec value suitable to use when injecting a
fault.  (Or potentially guest_walk_tables()'s job.)
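The proposed split could look something like the sketch below: the walker always learns the true access type via an "in" parameter, and only afterwards is the architectural error code to inject derived, masking the I/D bit when the guest CPU would not report it.  All names here (WALK_* flags, pfec_to_inject) are hypothetical, chosen for this sketch rather than taken from the Xen tree:

```c
#include <stdbool.h>

#define PFEC_insn_fetch (1u << 4)

/* "In" flags: the walker is always told the true access type
 * (hypothetical names, not the real Xen interface). */
#define WALK_write (1u << 0)
#define WALK_fetch (1u << 1)

/*
 * "Out" translation: after the walk, build the error code to inject
 * into the guest, reporting the I/D bit only when the guest's
 * configuration ((PAE && NXE) || SMEP) would report it.
 */
static unsigned int pfec_to_inject(unsigned int walk_flags,
                                   bool pae, bool nxe, bool smep)
{
    unsigned int pfec = 0;

    if ((walk_flags & WALK_fetch) && ((pae && nxe) || smep))
        pfec |= PFEC_insn_fetch;
    /* ...present/write/user/rsvd bits would be filled in similarly... */
    return pfec;
}
```

With this shape, the walker's access checks and the guest-visible error code can no longer disagree by construction: clearing a reporting bit affects only the injected pfec, never what guest_walk_tables() checks.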

