
Re: [Xen-devel] [PATCH V2 3/4] arm: allocate per-PCPU domheap pagetable pages



On Tue, 2013-04-23 at 12:38 +0100, Ian Campbell wrote:
> On Tue, 2013-04-23 at 12:33 +0100, Tim Deegan wrote:
> > Hi, 
> > 
> > At 12:21 +0100 on 23 Apr (1366719671), Ian Campbell wrote:
> > > > > +    /* Change to this CPUs pagetables */
> > > > > +    flush_xen_text_tlb();
> > > > > +
> > > > > +    ttbr = (uintptr_t) virt_to_maddr(this_cpu(xen_pgtable));
> > > > > +    WRITE_SYSREG64(ttbr, TTBR0_EL2);
> > > > 
> > > > isb() here, to make sure the CP write completes? 
> > > 
> > > Hrm, we don't have one in the primary CPU bring up either. 
> > > 
> > > flush_xen_text_tlb starts with an isb, which I suspect is sufficient? (I
> > > vaguely remember having a conversation along those lines when this code
> > > was changed to look like this).
> > 
> > Hrmn.  I guess that does turn out OK, but given that this happens
> > exactly once per CPU, the extra ISB is a cheap enough price to pay for
> > making the code obviously correct.
> 
> True. I refactored this code into:
>         static void write_ttbr(uint64_t ttbr)
>         {
>             flush_xen_text_tlb();
>
>             WRITE_SYSREG64(ttbr, TTBR0_EL2);
>             dsb(); /* ensure memory accesses do not cross over the TTBR0 write */
>             isb(); /* ensure that the TTBR write has completed */
>             flush_xen_text_tlb();
>         }
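For reference, the caller on the secondary CPU ends up looking roughly
like this (sketched from memory, so the function name and exact layout
are approximate, not what the patch literally has):

        /* Sketch only: switch this CPU onto its own pagetables, which
         * the boot CPU populated earlier.  Real function name/attributes
         * in the series may differ. */
        void __cpuinit mmu_init_secondary_cpu(void)
        {
            uint64_t ttbr;

            ttbr = (uintptr_t) virt_to_maddr(this_cpu(xen_pgtable));
            write_ttbr(ttbr);
        }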

Weird: if I don't mark write_ttbr inline, it hangs at this point
(on the model).

If I mark it inline, all is fine.

It sounds like a missing flush or barrier which is exposed by the return
("bx lr"), or possibly by the stack manipulation in the function
epilogue, but I can't spot it...
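
In the meantime, forcing the compiler's hand makes the hang go away;
something like the below (always_inline is the only change, and whether
that is a real fix or just papers over a missing flush I'm not sure):

        /* Sketch: force write_ttbr to be inlined into its caller so no
         * function return or stack access is executed between the TTBR0
         * write, the barriers and the final TLB flush.  This avoids the
         * hang on the model but may only be hiding the missing
         * flush/barrier rather than fixing it. */
        static inline __attribute__((always_inline))
        void write_ttbr(uint64_t ttbr)
        {
            flush_xen_text_tlb();

            WRITE_SYSREG64(ttbr, TTBR0_EL2);
            dsb(); /* ensure memory accesses do not cross over the TTBR0 write */
            isb(); /* ensure that the TTBR write has completed */

            flush_xen_text_tlb();
        }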

Ian.

