
Re: [Xen-devel] [PATCH 7/7] vm-event/arm: implement support for control-register write vm-events



Hello Corneliu,

On 17/06/16 11:36, Corneliu ZUZU wrote:
On 6/16/2016 7:49 PM, Julien Grall wrote:
On 16/06/16 15:13, Corneliu ZUZU wrote:

[...]

Please mention that PRRR and NMRR are aliased to MAIR0 and MAIR1
respectively. This will avoid spending time trying to understand why
the spec says they are trapped but you don't "handle" them.

I mentioned that in traps.h (see "AKA" comments). Will put the same
comment here then.

I noticed it later, but it was not obvious to find.

[...]

diff --git a/xen/arch/arm/vm_event.c b/xen/arch/arm/vm_event.c
new file mode 100644
index 0000000..3f23fec
--- /dev/null
+++ b/xen/arch/arm/vm_event.c
@@ -0,0 +1,112 @@

[...]

+#include <xen/vm_event.h>
+#include <asm/traps.h>
+
+#if CONFIG_ARM_64
+
+#define MWSINF_SCTLR    32,SCTLR_EL1
+#define MWSINF_TTBR0    64,TTBR0_EL1
+#define MWSINF_TTBR1    64,TTBR1_EL1
+#define MWSINF_TTBCR    64,TCR_EL1
+
+#elif CONFIG_ARM_32
+
+#define MWSINF_SCTLR    32,SCTLR
+#define MWSINF_TTBR0    64,TTBR0
+#define MWSINF_TTBR1    64,TTBR1

The values are the same as for arm64 (*_EL1 is aliased to * for
arm32). Please avoid duplication.

(see comment below about later reply)

Your later reply explains why you did not expose TTBR*_32 on ARM64, but it does not explain why the 3 defines above are the same as the ARM64 ones.
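To illustrate the deduplication being asked for, one possible restructuring (a sketch only; it assumes the *_EL1 spellings resolve on both arm32 and arm64 builds, which is what the aliasing remark implies):

```c
/*
 * Sketch: since SCTLR_EL1/TTBR0_EL1/TTBR1_EL1 are aliased to
 * SCTLR/TTBR0/TTBR1 on arm32, the common defines need no guard;
 * only the genuinely divergent registers stay under #ifdef.
 * (Names mirror the patch; the restructuring is hypothetical.)
 */
#define MWSINF_SCTLR    32,SCTLR_EL1
#define MWSINF_TTBR0    64,TTBR0_EL1
#define MWSINF_TTBR1    64,TTBR1_EL1

#ifdef CONFIG_ARM_64
# define MWSINF_TTBCR    64,TCR_EL1
#else
# define MWSINF_TTBCR    32,TTBCR
# define MWSINF_TTBR0_32 32,TTBR0_32
# define MWSINF_TTBR1_32 32,TTBR1_32
#endif
```

This keeps the size/register pairs in one place and makes the arm32-only TTBR*_32 cases visually distinct.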



+#define MWSINF_TTBR0_32 32,TTBR0_32
+#define MWSINF_TTBR1_32 32,TTBR1_32
+#define MWSINF_TTBCR    32,TTBCR
+
+#endif
+
+#define MWS_EMUL_(val, sz, r...) WRITE_SYSREG##sz((uint##sz##_t) (val), r)

The cast is not necessary.

+#define MWS_EMUL(r) CALL_MACRO(MWS_EMUL_, w->value, MWSINF_##r)
+
+static inline void vcpu_enter_write_data(struct vcpu *v)
+{
+    struct monitor_write_data *w = &v->arch.vm_event.write_data;
+
+    if ( likely(MWS_NOWRITE == w->status) )
+        return;
+
+    switch ( w->status )
+    {
+    case MWS_SCTLR:
+        MWS_EMUL(SCTLR);
+        break;
+    case MWS_TTBR0:
+        MWS_EMUL(TTBR0);
+        break;
+    case MWS_TTBR1:
+        MWS_EMUL(TTBR1);
+        break;
+#if CONFIG_ARM_32
+    case MWS_TTBR0_32:
+        MWS_EMUL(TTBR0_32);
+        break;
+    case MWS_TTBR1_32:
+        MWS_EMUL(TTBR1_32);
+        break;
+#endif

An AArch32 kernel can run on an AArch64 Xen. This means that
TTBR{0,1}_32 could be trapped and the write therefore be properly
emulated.

MCR TTBRx writes from AArch32 guests appear as writes of the low 32-bits
of AArch64 TTBRx_EL1 (architecturally mapped).
MCRR TTBRx writes from AArch32 guests appear as writes to AArch64
TTBRx_EL1.
Therefore, in the end the register we need to write is still TTBRx_EL1.

Why would you want to notify writes to TTBR*_32 when Xen is running in AArch32 mode but not in AArch64 mode?

The vm-events for an AArch32 guest should be exactly the same
regardless of Xen mode (i.e. AArch32 vs AArch64).
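The architectural mapping described above comes down to plain bit operations; a minimal, self-contained sketch (hypothetical helper names, operating on a host-side copy of the register rather than the real TTBRx_EL1):

```c
#include <stdint.h>

/*
 * Hypothetical helpers modelling how AArch32 guest writes land in
 * the 64-bit TTBRx_EL1 they are architecturally mapped onto:
 *  - a 32-bit MCR write replaces only bits [31:0];
 *  - a 64-bit MCRR write replaces the whole register.
 */
static uint64_t ttbr_fold_mcr(uint64_t ttbr_el1, uint32_t val)
{
    /* Keep the high half, substitute the low 32 bits. */
    return (ttbr_el1 & ~(uint64_t)UINT32_MAX) | val;
}

static uint64_t ttbr_fold_mcrr(uint64_t ttbr_el1, uint64_t val)
{
    (void)ttbr_el1; /* previous contents fully replaced */
    return val;
}
```

Either way the value that ultimately needs to be written is the 64-bit TTBRx_EL1, which is the point being made.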

[...]


@@ -0,0 +1,253 @@

[...]

+/*
+ * Emulation of system-register trapped writes that do not cause
+ * VM_EVENT_REASON_WRITE_CTRLREG monitor vm-events.
+ * Such writes are collaterally trapped due to setting the HCR_EL2.TVM bit.
+ *
+ * Regarding aarch32 domains, note that from Xen's perspective system-registers
+ * of such domains are architecturally-mapped to aarch64 registers in one of
+ * three ways:
+ *  - low 32-bits mapping   (e.g. aarch32 DFAR -> aarch64 FAR_EL1[31:0])
+ *  - high 32-bits mapping  (e.g. aarch32 IFAR -> aarch64 FAR_EL1[63:32])
+ *  - full mapping          (e.g. aarch32 SCTLR -> aarch64 SCTLR_EL1)
+ *
+ * Hence we define 2 macro variants:
+ *  - TVM_EMUL_SZ variant, for full mappings
+ *  - TVM_EMUL_LH variant, for low/high 32-bits mappings
+ */
+#define TVM_EMUL_SZ(regs, hsr, val, sz, r...)                           \
+{                                                                       \
+    if ( psr_mode_is_user(regs) )                                       \

Those registers are not accessible at EL0.

Hmm, I have this question noted.
I remember finding in the manuals that a user-mode write to those regs
would still trap to EL2, but I haven't tested that yet.
Will put that to the test and come back with results for v2.

Testing will not tell you if a trap will occur or not. The ARM ARM may define it as IMPLEMENTATION DEFINED.

From my understanding of the ARMv7 spec (B1.14.1 and B1.14.13 in ARM DDI 0406C.c), the instruction at PL0 (user mode) will not trap to the hypervisor:

"Setting HCR.TVM to 1 means that any attempt, to write to a Non-secure memory control register from a Non-secure PL1 or PL0 mode, that this reference manual does not describe as UNDEFINED , generates a Hyp Trap exception."

For ARMv8 (see the description of HCR_EL2.TVM, D7-1971 in ARM DDI 0487A.j), only NS EL1 writes to the registers will be trapped.


+        return inject_undef_exception(regs, hsr);                       \
+    WRITE_SYSREG##sz((uint##sz##_t) (val), r);

[...]

diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
index 4e5a272..edf9654 100644
--- a/xen/include/asm-arm/vm_event.h
+++ b/xen/include/asm-arm/vm_event.h
@@ -30,6 +30,12 @@ static inline int vm_event_init_domain(struct domain *d)

  static inline void vm_event_cleanup_domain(struct domain *d)
  {
+    struct vcpu *v;
+
+    for_each_vcpu ( d, v )
+        memset(&v->arch.vm_event, 0, sizeof(v->arch.vm_event));
+
+    memset(&d->arch.monitor, 0, sizeof(d->arch.monitor));
      memset(&d->monitor, 0, sizeof(d->monitor));
  }

@@ -41,7 +47,13 @@ static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
  static inline
  void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
  {
-    /* Not supported on ARM. */
+    /* X86 VM_EVENT_REASON_MOV_TO_MSR could (but shouldn't) end up here too. */

This should be an ASSERT/BUG_ON then.

We can't ASSERT/BUG_ON here because rsp->reason is controlled by the
toolstack user.

Your comment says "shouldn't", which I interpret as: it would be a bug if VM_EVENT_REASON_MOV_TO_MSR ends up here.

If not a BUG_ON, at least returning an error. We should not silently ignore something that shouldn't happen.

This has to do with the implementation of vm_event_resume.
What I think we should do instead is to surround "case VM_EVENT_REASON_MOV_TO_MSR" there with #ifdef CONFIG_X86.
Would that be suitable?

An x86 vm-event reason should never be accessible on ARM (and similarly an ARM vm-event reason on x86), so the hypervisor should return an error if someone calls in with the wrong vm-event reason.
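A sketch of that error-returning approach (the reason values and the helper are illustrative, not the actual vm_event_resume code or the real VM_EVENT_REASON_* numbering):

```c
#include <errno.h>
#include <stdbool.h>

/* Illustrative subset of vm-event reasons; real values differ. */
enum vm_event_reason {
    VM_EVENT_REASON_WRITE_CTRLREG,
    VM_EVENT_REASON_MOV_TO_MSR,     /* x86-only */
};

/*
 * Hypothetical check: reject reasons the current architecture can
 * never produce, instead of silently ignoring them.
 */
static int vm_event_check_reason(unsigned int reason, bool arch_is_x86)
{
    switch ( reason )
    {
    case VM_EVENT_REASON_MOV_TO_MSR:
        return arch_is_x86 ? 0 : -EOPNOTSUPP;
    case VM_EVENT_REASON_WRITE_CTRLREG:
        return 0;
    default:
        return -EINVAL;
    }
}
```

With such a check early in the resume path, a toolstack passing an x86-only reason to an ARM hypervisor gets an explicit error rather than a silent no-op.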

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
