Re: [PATCH v5 13/13] xen/arm: add cache coloring support for Xen
Hi,

On 15/01/2024 16:16, Julien Grall wrote:

On 15/01/2024 15:43, Carlo Nonato wrote:

Hi Julien,

Hi Carlo,

On Mon, Jan 15, 2024 at 12:18 PM Julien Grall <julien@xxxxxxx> wrote:

On 15/01/2024 10:11, Carlo Nonato wrote:

I understand what you're talking about, and it seems reasonable to get rid of xen_colored_temp[] and create_llc_coloring_mappings(), since in the end they serve the purpose of mapping the physically colored space that is already mapped using the xen_xenmap[] pagetables. What I don't understand is then how to copy/relocate Xen, since I no longer have a destination virtual space to use in relocate_xen().

You will need to link xen_xenmap[] in boot_second[...] as well. With that, you will be able to access the new Xen through the temporary area.

Wouldn't it result in overwriting the current virtual space mapping? boot_second is the live page table, and if I link xen_xenmap[] then XEN_VIRT_START would point to the new colored space, which is still empty at this stage...

If you link at XEN_VIRT_START then yes. But you could link at BOOT_RELOC_VIRT_START like you already do today.

[...]

Note that this means the init_ttbr cannot be written directly. But you can solve this problem by re-mapping the address.

How do I remap a single address?

You should be able to use map_domain_page() to map the page where init_ttbr is.

And if moving init_ttbr into the identity-mapped area means that it's no longer writable, so that I need to remap it, why move it into that area in the first place? Again, I think I'm missing something.

The goal is to have everything used (code, data) before the MMU is turned on residing in a single page. So secondary CPUs can directly jump to the colored Xen without any trouble.

This is what confuses me. Why does having everything in a single page make secondary CPUs able to jump directly to colored Xen? (also see below)

Because the code running with the MMU off can easily access everything.

3) To access the identity mapping area I would need some accessor that takes an address and returns it + phys_offset, or is there a better way to do it?

I am not sure I understand what you mean. Can you clarify?

In my idea, I would use the identity mapping to access the "old" variables, where "old" means non physically colored. init_ttbr is an example. When Xen is copied to the new physical space, init_ttbr is copied with it, and if the boot CPU modifies this variable, it's actually touching the colored one and not the old one. This means that secondary CPUs that still haven't jumped to the new space won't be able to see the new value and will never go online. So to access this "old" init_ttbr variable I need its identity address, which is its current virtual address + some physical offset. I was asking you whether using the identity mapping like this is the right approach.

Secondary CPUs would directly start on the colored Xen. So they will be able to access the "new" init_ttbr & co.

How can this be true? I mean, in call_psci_cpu_on() I can start those CPUs in the colored space, but they still use the boot_* pagetables

Are you looking at the 64-bit or the 32-bit code? For 64-bit, staging is not using the boot_* pagetables anymore for secondary CPUs. Instead, they directly jump to the runtime page tables.

and there I can't easily link the new colored space, or, at least, I'm not succeeding in doing that. What I tried at the moment is to link xen_xenmap in boot_second after switch_ttbr, because of the problem I described above. But then secondary CPUs never go online...

It would be helpful if you share some code.

[...]
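As a rough illustration of the BOOT_RELOC_VIRT_START suggestion above, the sketch below links the xen_xenmap[] table into the live boot_second[] page table so the colored copy of Xen becomes reachable through the temporary relocation area instead of XEN_VIRT_START. It is only a sketch: the helper names mfn_to_xen_entry(), write_pte() and second_table_offset() are existing Xen/arm helpers, but link_colored_xen() is an illustrative name, xen_xenmap[] is assumed to be reachable and to fit in a single page-table page, and none of this is taken from the series referenced later in the thread.

    /*
     * Sketch: expose the colored copy of Xen (already described by the
     * xen_xenmap[] table) through the temporary relocation area, so the
     * live XEN_VIRT_START mappings are not overwritten.
     */
    static void link_colored_xen(void)
    {
        lpae_t pte;

        /* Build an entry pointing at the xen_xenmap[] page-table page. */
        pte = mfn_to_xen_entry(virt_to_mfn(xen_xenmap), MT_NORMAL);
        pte.pt.table = 1;

        /* Hook it into the live boot page tables at the relocation area. */
        write_pte(&boot_second[second_table_offset(BOOT_RELOC_VIRT_START)],
                  pte);
    }

Similarly, here is a minimal sketch of the map_domain_page() suggestion for updating init_ttbr once it sits in a region that is mapped read-only; the function name set_init_ttbr() is illustrative only, and the assumption is that init_ttbr's page is not otherwise writable.

    /*
     * Sketch: init_ttbr cannot be written through its normal (read-only)
     * mapping, so write it through a transient writable mapping instead.
     */
    static void set_init_ttbr(uint64_t ttbr)
    {
        void *page = map_domain_page(virt_to_mfn(&init_ttbr));
        uint64_t *ptr = page + ((vaddr_t)&init_ttbr & ~PAGE_MASK);

        *ptr = ttbr;

        /* Secondary CPUs may read init_ttbr with the MMU/caches off. */
        clean_dcache(*ptr);

        unmap_domain_page(page);
    }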
... as I wrote earlier, your current approach seems to have a flaw. As you overwrite xen_bootmodule->{start, size}, setup_mm() will end up adding the old Xen region to the boot allocator. This happens before any secondary CPUs are booted up. IOW, the allocator may hand out some memory from the old Xen, and nothing good will come of that. The only way to solve it is to add another module, so the memory is skipped by setup_mm(). However, see below.

Yes, that should be memory that in the end would not be needed, so it must return to the boot allocator (if that's what you mean). But how to do that?

You can't really discard the old temporary Xen. This may work today because we don't support CPU hotplug or suspend/resume. But there was some series on the ML to enable them, and I don't see any reason why someone would not want to use those features with cache coloring. So the old temporary Xen would have to be kept around forever. This is up to 8MB of memory wasted.

The right approach is to have the secondary CPU boot code (including the variables it is using) fit in the same page (or possibly multiple pages, so long as the region is small and physically contiguous). With that it doesn't matter where the trampoline is; it could stay at the old place, but we would only waste a few pages rather than up to 8MB as it is today.

So what you are suggesting is to create a new section in the linker script for the trampoline code and data,

We already have a section for that in place (see .idmap.*), which happens to be at the beginning of Xen. Right now, the section is in .text, which is why it is read-only executable.

then in setup_mm() we would skip this memory?

We should not need this. Secondary boot CPUs should boot directly on the colored Xen.

Am I following you correctly? Sorry, these topics are a little outside my preparation, as you probably already guessed.

No worries. I am happy to go into as much detail as necessary. I can also attempt to write a patch if that helps (unless someone else among the Arm maintainers wants to give it a try).

Yes, this would help. Thanks.

I will try to have a look this evening. If I can't, it may have to wait a couple of weeks unless someone has time beforehand.

Series sent [1]. This is not fully complete for cache coloring. You will need to modify the identity functions to take into account that the identity map will be different.

You will also want to check the new identity map area is still below (see the check in prepare_boot_identity_mapping()).

Cheers,

[1] https://lore.kernel.org/xen-devel/20240116115509.77545-1-julien@xxxxxxx/

--
Julien Grall
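To illustrate the "fit the secondary-CPU boot variables in the same region as the boot code" idea discussed above, here is a minimal sketch. The ".data.idmap" section name and the matching linker-script change are assumptions for illustration only: today the .idmap.* input sections at the start of Xen are part of .text (hence read-only executable), so writable variables placed there would still need the remapping treatment discussed for init_ttbr. init_ttbr and smp_up_cpu are the existing Xen/arm variables; only the section placement is the new part.

    /*
     * Sketch: keep the variables consumed by the secondary-CPU boot path
     * next to the trampoline code, so only these few pages (rather than
     * the whole old Xen image, up to 8MB) have to be preserved when Xen
     * relocates itself to the colored physical space.
     *
     * ".data.idmap" is an assumed section name; it would have to be added
     * to xen.lds.S alongside the existing .idmap.* input sections.
     */
    #include <xen/compiler.h>
    #include <xen/types.h>

    /* Page-table base that secondary CPUs load before enabling the MMU. */
    uint64_t __section(".data.idmap") init_ttbr;

    /* Identifier of the CPU currently being brought up. */
    unsigned long __section(".data.idmap") smp_up_cpu;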