
Re: [Xen-devel] [PATCH v6 2/4] x86/hvm: Treat non-instruction fetch nested page faults also as read violations



but doing so just moves from one incomplete solution (where read-modify-write accesses are not treated as read violations) to another incomplete solution (where all writes are treated as read violations). If there's actual usage relying on accurate read-violation information, neither solution works. So I don't see the value of this change.

 

From: Tamas Lengyel [mailto:tamas.lengyel@xxxxxxxxxxxx]
Sent: Thursday, August 14, 2014 1:02 AM
To: Tian, Kevin
Cc: xen-devel@xxxxxxxxxxxxx; ian.jackson@xxxxxxxxxxxxx; stefano.stabellini@xxxxxxxxxxxxx; ian.campbell@xxxxxxxxxx; boris.ostrovsky@xxxxxxxxxx; suravee.suthikulpanit@xxxxxxx; Aravind.Gopalakrishnan@xxxxxxx; Nakajima, Jun; Dong, Eddie; JBeulich@xxxxxxxx
Subject: Re: [PATCH v6 2/4] x86/hvm: Treat non-instruction fetch nested page faults also as read violations

 

On Thu, Aug 14, 2014 at 2:31 AM, Tian, Kevin <kevin.tian@xxxxxxxxx> wrote:

> From: Tamas K Lengyel [mailto:tamas.lengyel@xxxxxxxxxxxx]
> Sent: Monday, August 11, 2014 7:49 AM

>
> As pointed out by Jan Beulich in
> http://lists.xen.org/archives/html/xen-devel/2014-08/msg01269.html:
> "Read-modify-write instructions absolutely need to be treated as read
> accesses, yet hardware doesn't guarantee to tell us so (they may surface as
> just write accesses)." This patch addresses the issue in both the VMX and the
> SVM side.
>
> VMX: Treat all non-instruction fetch violations as read violations (in addition to
> those that were already reported as read violations).

so as a result all write violations are also treated as read violations here?
that's more than fixing the issue quoted above...

 

That's correct (treating all non-instruction-fetch violations as reads means all write violations are reported as read+write violations). Ideally we would be able to differentiate between write violations that were triggered by read-modify-write instructions and mark only those as read+write violations, but as we can't, we take the safer approach here and assume every write violation to be a read violation as well. It has the added benefit that the VMX and SVM sides now report violations the same way.

 


> SVM: Refine the handling to distinguish between read/write and instruction
> fetch access violations.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@xxxxxxxxxxxx>
> ---
>  xen/arch/x86/hvm/svm/svm.c | 2 +-
>  xen/arch/x86/hvm/vmx/vmx.c | 3 ++-
>  2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 1f72e19..9531248 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1404,7 +1404,7 @@ static void svm_do_nested_pgfault(struct vcpu *v,
>      struct p2m_domain *p2m = NULL;
>
>      struct npfec npfec = {
> -        .read_access = 1, /* All NPFs count as reads */
> +        .read_access = !(pfec & PFEC_insn_fetch),
>          .write_access = !!(pfec & PFEC_write_access),
>          .insn_fetch = !!(pfec & PFEC_insn_fetch)
>      };
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 656ce61..af0ad7c 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2354,7 +2354,8 @@ static void ept_handle_violation(unsigned long
> qualification, paddr_t gpa)
>      int ret;
>      struct domain *d = current->domain;
>      struct npfec npfec = {
> -        .read_access = !!(qualification & EPT_READ_VIOLATION),
> +        .read_access = !!(qualification & EPT_READ_VIOLATION) ||
> +                        !(qualification & EPT_EXEC_VIOLATION),
>          .write_access = !!(qualification & EPT_WRITE_VIOLATION),
>          .insn_fetch = !!(qualification & EPT_EXEC_VIOLATION)
>      };
> --
> 2.0.1

 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

