Re: [Xen-devel] [RFC] xen/arm: Handling cache maintenance instructions by set/way
>>> On 06.12.17 at 13:58, <julien.grall@xxxxxxxxxx> wrote:
> On 12/06/2017 12:28 PM, George Dunlap wrote:
>> 2. It sounds like rather than using PoD, you could use the
>> "misconfigured p2m table" technique that x86 uses: set bits in the p2m
>> entry which cause a specific kind of HAP fault when accessed. The fault
>> handler then looks in the p2m entry, and if it finds an otherwise valid
>> entry, it just fixes the "misconfigured" bits and continues.
>
> I thought about this. But when do you set the entry to misconfigured?

What we do on x86 is flag all entries at the top level as misconfigured
whenever we would otherwise have to walk the full tree. Upon access, the
misconfigured flag is propagated down the page table hierarchy, with only
the intermediate and leaf entries needed for the current access becoming
properly configured again. In your case, as long as only a limited set of
leaf entries is touched before any S/W emulation is needed, you would be
able to skip all misconfigured entries in your traversal, just as with
PoD you would skip unpopulated ones.

> If you take the example of 32-bit Linux, there are a couple of full
> cache cleans during a uni-processor boot. So you would need to go
> through the p2m multiple times and reset the access bits.

The proposed mechanism isn't really similar to traditional accessed-bit
handling. If there is no other use for the accessed bit (assuming there
is one in ARM PTEs in the first place), and as long as the bit being
clear gives you some sort of signal (on x86 this and the dirty bit are
updated by hardware, as a kind of side effect of a page table walk), it
could of course be used for the purpose here.

Jan
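
[Editor's note] To make the "misconfigure the top level, repropagate lazily"
idea concrete, here is a minimal, self-contained C sketch. It is not Xen
code: the two-level layout, the structures and the names (struct top,
struct leaf, misconfigure_all, handle_fault, sw_walk) are invented for
illustration, and the per-page work is only a placeholder comment.

/*
 * Hypothetical sketch (not Xen code) of lazily repropagated
 * "misconfigured" p2m entries.  A real p2m has more levels and
 * hardware-defined bit encodings.
 */
#include <stdbool.h>
#include <stdio.h>

#define L1_ENTRIES 4      /* top level */
#define L2_ENTRIES 4      /* leaf level */

struct leaf {
    bool valid;
    bool misconfigured;   /* in HW this would be an otherwise-invalid bit pattern */
};

struct top {
    bool valid;
    bool misconfigured;
    struct leaf leaves[L2_ENTRIES];
};

static struct top p2m[L1_ENTRIES];

/* Full invalidation touches only the top level: O(L1_ENTRIES). */
static void misconfigure_all(void)
{
    for (int i = 0; i < L1_ENTRIES; i++)
        if (p2m[i].valid)
            p2m[i].misconfigured = true;
}

/*
 * Fault handler: fix up only the path used by this access, pushing the
 * misconfigured state one level down so untouched leaves stay flagged.
 */
static void handle_fault(int l1, int l2)
{
    struct top *t = &p2m[l1];

    if (t->valid && t->misconfigured) {
        for (int j = 0; j < L2_ENTRIES; j++)
            if (t->leaves[j].valid)
                t->leaves[j].misconfigured = true;
        t->misconfigured = false;
    }
    if (t->leaves[l2].valid && t->leaves[l2].misconfigured) {
        /* Here the real handler would do its per-page work, e.g. clean
         * the cache for this page, before making it usable again. */
        t->leaves[l2].misconfigured = false;
    }
}

/* S/W emulation walk: misconfigured subtrees can simply be skipped. */
static void sw_walk(void)
{
    for (int i = 0; i < L1_ENTRIES; i++) {
        if (!p2m[i].valid || p2m[i].misconfigured)
            continue;                 /* like skipping PoD holes */
        for (int j = 0; j < L2_ENTRIES; j++)
            if (p2m[i].leaves[j].valid && !p2m[i].leaves[j].misconfigured)
                printf("flush leaf %d.%d\n", i, j);
    }
}

int main(void)
{
    /* populate a couple of entries */
    p2m[0].valid = true;
    p2m[0].leaves[1].valid = true;
    p2m[0].leaves[2].valid = true;

    misconfigure_all();   /* stands in for the guest's full cache clean */
    handle_fault(0, 1);   /* guest touches one page afterwards */
    sw_walk();            /* only leaf 0.1 needs any work */
    return 0;
}

Running this, the second traversal visits only the one leaf that was
touched after the full clean; everything still marked misconfigured is
skipped, which is the cost saving being argued for above.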
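[Editor's note] The accessed-bit alternative mentioned at the end can be
sketched the same way. Again the names are made up, and how the flag gets
set is deliberately left out (on x86 the A/D bits are set by hardware;
on ARM it could be an access-flag fault handler or hardware update where
the CPU supports it).

/*
 * Hypothetical sketch of using an "accessed" flag as the signal: the
 * flush path clears the flags cheaply, and the later S/W walk only has
 * to act on pages that were touched again in between.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGES 8

struct pte {
    bool valid;
    bool accessed;        /* AF in an ARM PTE, if it is free for this use */
};

static struct pte p2m[PAGES];

/* Stand-in for reacting to the guest's full clean: clear all accessed bits. */
static void clear_accessed(void)
{
    for (int i = 0; i < PAGES; i++)
        p2m[i].accessed = false;
}

/* Hardware (or a fault handler) marks a page when the guest uses it. */
static void touch(int i)
{
    p2m[i].accessed = true;
}

/* S/W emulation: only pages accessed since the last clean need work. */
static void sw_walk(void)
{
    for (int i = 0; i < PAGES; i++)
        if (p2m[i].valid && p2m[i].accessed)
            printf("flush page %d\n", i);
}

int main(void)
{
    p2m[2].valid = p2m[5].valid = true;
    clear_accessed();     /* first full clean */
    touch(5);             /* guest uses one page afterwards */
    sw_walk();            /* the next clean only has page 5 to do */
    return 0;
}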