
Re: [Xen-devel] [PATCH V3 4/4] arm: allocate per-PCPU domheap pagetable pages



On Wed, 2013-04-24 at 13:49 +0100, Tim Deegan wrote:
> At 11:54 +0100 on 24 Apr (1366804441), Ian Campbell wrote:
> > +    /* Some of these slots may have been used during start of day and/or
> > +     * relocation. Make sure they are clear now. */
> > +    memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> > +    flush_xen_dcache_va_range(this_cpu(xen_dommap),
> > +                              DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> > +}
> 
> This is a dcache flush -- do we need flush_xen_data_tlb_range_va()
> instead (or possibly as well)?

The reason for this zeroing is really for the benefit of the code in
map_domain_page, which looks at the present bits to find a free slot. I
had a crash due to it stumbling over uninitialised memory in a freshly
allocated secondary CPU's dommap, and figured I should zero the boot
CPU's one as well.

It probably doesn't actually matter that much if the MMU sees stale
entries here (in the case of the boot CPU they will be valid entries,
not invalid gunk like on the secondaries). At the point that
map_domain_page actually puts something useful in here it will do all
the right flushes (I hope!).

In other words, I have a feeling that even the flush which is there is
not strictly necessary. However, it is consistent with write_pte, which
is effectively what the memset is doing.

>   Also: do we care about dirty cachelines
> from the earlier operations?  And in that case should we flush the cache
> before the PTs are cleared (and again afterwards to guard against
> prefetches)?

I sure hope those were all flushed as part of the relocation etc. I'm
pretty sure they must have been.

> Tim (still not convinced my mental model of ARM memory is right...)



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel