
Re: [Xen-devel] [PATCH 15/18] xen/arm: Resume memory management on Xen resume



On Mon, 12 Nov 2018, Julien Grall wrote:
> Hi,
> 
> On 11/12/18 11:30 AM, Mirela Simonovic wrote:
> > The MMU needs to be enabled in the resume flow before the context
> > can be restored (we need to be able to access the context data by
> > virtual address in order to restore it). The configuration of system
> > registers prior to branching to the routine that sets up the page
> > tables is copied from xen/arch/arm/arm64/head.S.
> > After the MMU is enabled, the content of TTBR0_EL2 is changed to
> > point to init_ttbr (page tables used at runtime).
> > 
> > At boot the init_ttbr variable is updated when a secondary CPU is
> > hotplugged. In a system with only one physical CPU, init_ttbr would
> > therefore never be initialized for use in the resume flow. To get the
> > variable initialized in all scenarios, this patch makes the boot CPU
> > update init_ttbr after it sets up the page tables for runtime.
> > 
> > After the memory management is resumed, SCTLR_WXN has to be set in
> > SCTLR_EL2 so that a mapping cannot be both writable and executable
> > (as was configured prior to suspend). This is done using an existing
> > function (mmu_init_secondary_cpu).
> > 
> > Signed-off-by: Mirela Simonovic <mirela.simonovic@xxxxxxxxxx>
> > Signed-off-by: Saeed Nowshadi <saeed.nowshadi@xxxxxxxxxx>
> > 
> > ---
> > Changes in v2:
> > 
> > - Patch from v1:
> > "[XEN PATCH 17/21] xen/arm: Set SCTLR_WXN in SCTLR_EL2 on resume"
> > is squashed with this patch, because it is indeed related to resuming
> > the memory management
> > - Since the original patch was named 'Enable the MMU', and this now
> > covers the full resume of memory management rather than just enabling
> > the MMU, the commit title and message are updated
> > ---
> >   xen/arch/arm/arm64/entry.S | 88 ++++++++++++++++++++++++++++++++++++++++++++++
> >   xen/arch/arm/mm.c          |  1 +
> >   xen/arch/arm/suspend.c     |  6 ++++
> >   3 files changed, 95 insertions(+)
> > 
> > diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
> > index dbc4717903..5efa30e8ee 100644
> > --- a/xen/arch/arm/arm64/entry.S
> > +++ b/xen/arch/arm/arm64/entry.S
> > @@ -1,6 +1,7 @@
> >   #include <asm/asm_defns.h>
> >   #include <asm/current.h>
> >   #include <asm/macros.h>
> > +#include <asm/page.h>
> >   #include <asm/regs.h>
> >   #include <asm/alternative.h>
> >   #include <asm/smccc.h>
> > @@ -534,6 +535,93 @@ ENTRY(__context_switch)
> >           ret
> >     ENTRY(hyp_resume)
> > +        msr   DAIFSet, 0xf           /* Disable all interrupts */
> > +
> > +        tlbi  alle2
> > +        dsb   sy                     /* Ensure completion of TLB flush */
> > +        isb
> > +
> > +        ldr   x0, =start
> > +        adr   x19, start             /* x19 := paddr (start) */
> > +        sub   x20, x19, x0           /* x20 := phys-offset */
> > +
> > +        /* XXXX call PROCINFO_cpu_init here */
> > +
> > +        /* Set up memory attribute type tables */
> > +        ldr   x0, =MAIRVAL
> > +        msr   mair_el2, x0
> > +
> > +        /* Set up TCR_EL2:
> > +         * PS -- Based on ID_AA64MMFR0_EL1.PARange
> > +         * Top byte is used
> > +         * PT walks use Inner-Shareable accesses,
> > +         * PT walks are write-back, write-allocate in both cache levels,
> > +         * 48-bit virtual address space goes through this table. */
> > +        ldr   x0, =(TCR_RES1|TCR_SH0_IS|TCR_ORGN0_WBWA|TCR_IRGN0_WBWA|TCR_T0SZ(64-48))
> > +        /* ID_AA64MMFR0_EL1[3:0] (PARange) corresponds to TCR_EL2[18:16] (PS) */
> > +        mrs   x1, ID_AA64MMFR0_EL1
> > +        bfi   x0, x1, #16, #3
> > +
> > +        msr   tcr_el2, x0
> > +
> > +        /* Set up the SCTLR_EL2:
> > +         * Exceptions in LE ARM,
> > +         * Low-latency IRQs disabled,
> > +         * Write-implies-XN disabled (for now),
> > +         * D-cache disabled (for now),
> > +         * I-cache enabled,
> > +         * Alignment checking disabled,
> > +         * MMU translation disabled (for now). */
> > +        ldr   x0, =(HSCTLR_BASE)
> > +        msr   SCTLR_EL2, x0
> > +
> > +        /* Ensure that any exceptions encountered at EL2
> > +         * are handled using the EL2 stack pointer, rather
> > +         * than SP_EL0. */
> > +        msr spsel, #1
> > +
> > +        /* Rebuild the boot pagetable's first-level entries. The structure
> > +         * is described in mm.c.
> > +         *
> > +         * After the CPU enables paging it will add the fixmap mapping
> > +         * to these page tables, however this may clash with the 1:1
> > +         * mapping. So each CPU must rebuild the page tables here with
> > +         * the 1:1 in place. */
> > +
> > +        /* If Xen is loaded at exactly XEN_VIRT_START then we don't
> > +         * need an additional 1:1 mapping, the virtual mapping will
> > +         * suffice.
> > +         */
> > +        cmp   x19, #XEN_VIRT_START
> > +        cset  x25, eq                /* x25 := identity map in place, or not */
> > +
> > +        /* Write Xen's PT's paddr into TTBR0_EL2 */
> > +        ldr   x4, =boot_pgtable     /* xen_pgtable    */
> > +        add   x4, x4, x20           /* x4 := paddr (boot_pagetable) */
> > +        msr   TTBR0_EL2, x4
> > +
> > +        /* Set up page tables */
> > +        bl    setup_page_tables
> > +
> > +        ldr   x1, =mmu_resumed      /* Explicit vaddr, not RIP-relative */
> > +        mrs   x0, SCTLR_EL2
> > +        orr   x0, x0, #SCTLR_M      /* Enable MMU */
> > +        orr   x0, x0, #SCTLR_C      /* Enable D-cache */
> > +        dsb   sy                    /* Flush PTE writes and finish reads */
> > +        msr   SCTLR_EL2, x0         /* now paging is enabled */
> > +        isb                         /* Now, flush the icache */
> > +        br    x1                    /* Get a proper vaddr into PC */
> > +
> > +mmu_resumed:
> > +        ldr   x4, =init_ttbr         /* VA of TTBR0_EL2 stashed by CPU 0 */
> 
> While I know that Xen switches between page-tables in some places, this
> should not be done. Indeed, such a sequence may break coherency of the
> TLBs. In order to avoid that, you want to apply the break-before-make
> sequence. I haven't fully thought through how to solve it in Xen
> properly today.
> 
> I am quite worried about introducing a similar sequence in Xen before
> fixing the current sequences.

We can turn this assembly code into a .macro, so that at least when we
get around to fixing it, we only need to fix it in one place. In fact, I
noticed that the code sequences right before and right after
"mmu_resumed" are both identical to the ones in head.S; it would be good
to avoid the duplication if possible.
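For instance, the shared "turn on the MMU and jump to a runtime vaddr"
sequence could be factored roughly as below. This is only a sketch: the
macro name enable_mmu and the parameter names are made up here, and the
barriers mirror what the patch already does rather than a reviewed design.

```asm
/* Hypothetical helper, e.g. in a shared arm64 header. \sctlr is a
 * scratch register; \target holds the virtual address to branch to
 * once translation is live. */
.macro enable_mmu sctlr, target
        mrs   \sctlr, SCTLR_EL2
        orr   \sctlr, \sctlr, #SCTLR_M   /* Enable MMU */
        orr   \sctlr, \sctlr, #SCTLR_C   /* Enable D-cache */
        dsb   sy                         /* Flush PTE writes and finish reads */
        msr   SCTLR_EL2, \sctlr          /* Paging enabled from here on */
        isb
        br    \target                    /* Continue at the runtime vaddr */
.endm
```

Both head.S and hyp_resume would then just load the target vaddr and
invoke the macro, so a future break-before-make fix lands in one place.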


> > +        ldr   x4, [x4]               /* Actual value */
> > +        dsb   sy
> > +        msr   TTBR0_EL2, x4
> > +        dsb   sy
> > +        isb
> > +        tlbi  alle2
> > +        dsb   sy                     /* Ensure completion of TLB flush */
> > +        isb
> >           b .
> >     /*

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

