Re: [Xen-devel] arm: Boot allocator fails with multi node memory
Hi Stefano,

On 16/01/17 19:59, Stefano Stabellini wrote:
> On Mon, 16 Jan 2017, Julien Grall wrote:
>> On 09/01/17 08:40, Jan Beulich wrote:
>>> On 07.01.17 at 07:05, <vijay.kilari@xxxxxxxxx> wrote:
>>>> Question: Why is this address not mapped? If it is mapped, where is this va mapped?
>>>
>>> Well, I think this is the wrong question to ask. Why would it be mapped if there's no memory there?
>>
>> To be honest, I don't think implementing virt_to_mfn using a hardware instruction will result in a faster translation. If you look at the virt_to_mfn implementation on x86, it is only a few instructions because it only cares about the direct mapping and the Xen binary addresses. In the case of ARM, virt_to_mfn is able to translate any address (such as a vmap one).
>
> However, it is important not to diverge too much from x86 to avoid this class of problems in the future. In other words, the semantics of virt_to_mfn must be the same on x86 and ARM. We can either:
>
> 1) change x86 to match ARM
>
> This could be as simple as adding a check, only for debug builds, in the x86 implementation of virt_to_mfn to make sure that the virtual address passed is mapped or mappable, and throw an error if it is not.
>
> 2) change ARM to match x86
>
> Add a special case in the arm64 implementation of virt_to_mfn to check if the address is in the directmap region and translate it directly, without ATS1HR and friends, in that case. Both the directmap and Xen binary virtual addresses would need a specific case.

I am not a big fan of adding a specific case because I think it is just a workaround to make the code happy. After all, the MFN returned is only theoretical and should only be used for checking. You would also lose the ability to catch a problem with the page tables as soon as possible.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
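[Editorial note: for illustration, option 2 discussed above could look roughly like the sketch below, which translates directmap virtual addresses arithmetically instead of issuing ATS1HR and friends, and falls back to the existing hardware-assisted path otherwise. The identifiers DIRECTMAP_VIRT_START, DIRECTMAP_VIRT_END, directmap_base_mfn and hw_virt_to_mfn are placeholders, not the actual Xen symbols.]

    /*
     * Hypothetical sketch of option 2: special-case directmap addresses
     * and compute the MFN with arithmetic, with no hardware address
     * translation. All names below are placeholders, not real Xen symbols.
     */
    static inline unsigned long sketch_virt_to_mfn(unsigned long va)
    {
        if ( va >= DIRECTMAP_VIRT_START && va < DIRECTMAP_VIRT_END )
            /* Linear offset into the directmap: no page-table lookup needed. */
            return directmap_base_mfn + ((va - DIRECTMAP_VIRT_START) >> PAGE_SHIFT);

        /* Anything else still goes through the existing AT-based translation. */
        return hw_virt_to_mfn(va);
    }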