RE: [Xen-devel] PAE issue (32-on-64 work)
> As I had expressed before, I'm thinking that the current way of handling
> the top level of PAE paging is inappropriate, even after the above-4G
> adjustments that cured part of the problem. This is specifically because
> - the handling here isn't consistent with how hardware behaves in the
>   same situation (though the Xen behavior is probably within range of
>   the generic architecture specification), in that the processor reads
>   the 4 top level entries when CR3 gets re-loaded (and hence doesn't try
>   to access them later in any way), while Xen treats them (including
>   potential updates to them) just like entries at any other level in the
>   hierarchy
> - the guest still needs to allocate a full page, even though only the
>   first 32 bytes of it are actually used
> - the shadowing done in Xen could be avoided altogether by following
>   hardware behavior.
>
> Just now I found that there is a resulting issue for the 32-on-64 work
> I'm doing: since none of the entries 4...511 of the PMD get initialized
> in Linux, and since Xen nevertheless has to validate all 512 entries (in
> order to avoid making available translations that could be used during
> speculative execution), the validation has the potential to fail (and
> does in reality), resulting in the guest dying. The only option I
> presently see is to special-case the compatibility guest in the L3
> handling and (I really hate to do that) clear out the 508 supposedly
> unused entries (or at least clear their present bits), meaning that no
> guest may ever make clever assumptions and try to store some other data
> in the unused portion of the pgd page.

Why not just have a fixed per-vcpu L4 and L3, into which the 4 PAE L3's
get copied on every cr3 load? It's most analogous to what happens today
(a rough sketch of the idea follows below).

We've thought of removing the page-size restriction on PAE L3's in the
past, but it's pretty low down the priority list as it typically doesn't
cost a great deal of memory.

Ian