Re: Limitations for Running Xen on KVM Arm64
On 03/11/2025 13:09, haseeb.ashraf@xxxxxxxxxxx wrote:
> Hi,

Hi,

>> To clarify, Xen is using the local TLB version. So it should be
>> vmalls12e1.
>
> If I understood correctly, won't HCR_EL2.FB make a local TLB operation
> a broadcast one?

HCR_EL2.FB only applies to EL1. So it depends on who is setting it in
this situation. If it is Xen, then it would only apply to its VM. If it
is KVM, then it would also apply to the nested Xen.

> Can you explain in exactly what scenario we can use vmalle1?

We can use vmalle1 in Xen for the situation we discussed. I was only
pointing out that the implementation in KVM seems suboptimal.

>> Before going into batching, do you have any data showing how often
>> XENMEM_remove_from_physmap is called in your setup? Similarly, I would
>> be interested to know the number of TLB flushes within one hypercall
>> and whether the regions unmapped were contiguous.
>
> The number of times XENMEM_remove_from_physmap is invoked depends upon
> the size of each binary. Each hypercall issues the TLB flush once. If I
> use a persistent rootfs, then this hypercall is invoked almost 7458
> times (+8 approx.), which is equal to the sum of the kernel and DTB
> image pages:
>
> domainbuilder: detail: xc_dom_alloc_segment: kernel     : 0x40000000 -> 0x41d1f200 (pfn 0x40000 + 0x1d20 pages)
> domainbuilder: detail: xc_dom_alloc_segment: devicetree : 0x48000000 -> 0x4800188d (pfn 0x48000 + 0x2 pages)
>
> And if I use a ramdisk image, then this hypercall is invoked almost
> 222815 times (+8 approx.), which is equal to the sum of the kernel,
> ramdisk and DTB image 4k pages:
>
> domainbuilder: detail: xc_dom_alloc_segment: kernel     : 0x40000000 -> 0x41d1f200 (pfn 0x40000 + 0x1d20 pages)
> domainbuilder: detail: xc_dom_alloc_segment: module0    : 0x48000000 -> 0x7c93d000 (pfn 0x48000 + 0x3493d pages)
> domainbuilder: detail: xc_dom_alloc_segment: devicetree : 0x7c93d000 -> 0x7c93e8d9 (pfn 0x7c93d + 0x2 pages)
>
> You can see the address ranges in the above logs: the addresses seem
> contiguous in this address space, and at best we could reduce the number
> of calls to 3, one at the end of each image when it is removed from the
> physmap.

Thanks for the log. I haven't looked at the toolstack code. Does this
mean only one ioctl call will be issued per blob?

>> We may still send a few TLB flushes because:
>>
>> * We need to avoid long-running operations, so the hypercall may
>>   restart. So we will have to flush at minimum before every restart.
>> * The current way we handle batching is that we process one item at a
>>   time. As this may free memory (either leaf or intermediate
>>   page-tables), we will need to flush the TLBs first to prevent the
>>   domain accessing the wrong memory. This could be solved by keeping
>>   track of the list of memory to free. But this is going to require
>>   some work and I am not entirely sure this is worth it at the moment.
>
> I think now you have the figure that 222815 TLB flushes are too many,
> and a few TLB flushes would still be a lot better. Fewer than 10 TLB
> flushes are not really noticeable.

I agree this is too much, but it is going to require quite a bit of
work (as I said, we would need to keep track of the pages to be freed
before the TLB flush). At least to me, it feels like switching to TLBI
range (or a series of IPAS2E1IS) is an easier win. But if you feel like
doing the larger rework, I would be happy to have a look to check
whether it would be an acceptable change for upstream.

>> We could use a series of TLBI IPAS2E1IS, which I think is what TLBI
>> range is meant to replace (so long as the addresses are contiguous in
>> the given space).
>
> Isn't IPAS2E1IS a range TLBI instruction? My understanding is that
> this instruction is available on processors with range TLBI support,
> but I could be wrong. I saw its KVM emulation, which does a full
> invalidation if range TLBI is not supported
> (https://github.com/torvalds/linux/blob/master/arch/arm64/kvm/hyp/pgtable.c#L647).

IPAS2E1IS only allows you to invalidate one address at a time and is
available on all processors. The R version is only available when the
processor supports TLBI range and allows you to invalidate multiple
contiguous addresses.

Cheers,

-- 
Julien Grall
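To make the deferred-free idea above concrete, here is a minimal C sketch,
assuming invented helper names: p2m_clear_entry, flush_guest_tlb, free_page
and MAX_BATCH are placeholders for illustration, not the real Xen
interfaces. The idea is to unmap the whole batch first, flush once, and only
then return the pages to the allocator.

/*
 * Illustration only -- not Xen code. All helper names and types below
 * are made up for the sketch.
 */
#define MAX_BATCH 512

struct page_ref;                         /* opaque handle to an unmapped page */
struct domain;                           /* opaque handle to the guest        */

struct page_ref *p2m_clear_entry(struct domain *d, unsigned long gfn);
void flush_guest_tlb(struct domain *d);  /* e.g. one broadcast stage-2 flush  */
void free_page(struct page_ref *pg);

int p2m_remove_batch(struct domain *d, unsigned long gfn, unsigned int nr)
{
    struct page_ref *to_free[MAX_BATCH];
    unsigned int i, n = 0;

    for ( i = 0; i < nr && n < MAX_BATCH; i++ )
    {
        /*
         * Unmap, but do NOT free yet: other CPUs may still hold stale
         * stage-2 TLB entries pointing at the page.
         */
        struct page_ref *pg = p2m_clear_entry(d, gfn + i);

        if ( pg )
            to_free[n++] = pg;
    }

    /* One flush for the whole batch instead of one per page. */
    flush_guest_tlb(d);

    /* Only now is it safe to hand the memory back to the allocator. */
    for ( i = 0; i < n; i++ )
        free_page(to_free[i]);

    return 0;
}

The ordering is the whole point: freeing before the single flush would let
the guest, through a stale stage-2 TLB entry, keep accessing memory that has
already been handed back to the allocator.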
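For the last point, here is a sketch of the two invalidation strategies,
again illustrative rather than Xen code: a series of per-page TLBI
IPAS2E1IS versus a single TLBI RIPAS2E1IS when FEAT_TLBIRANGE is
implemented. The range-register layout (BaseADDR/NUM/SCALE/TG) follows my
reading of the Arm ARM and of Linux's arch/arm64/include/asm/tlbflush.h and
should be verified; assembling the R form also needs an armv8.4-a or later
target.

#include <stdint.h>

#define PAGE_SHIFT 12

/*
 * IPAS2E1IS/RIPAS2E1IS only drop stage-2 entries; combined stage-1+2
 * entries cached by VA still need a VMALLE1IS to finish the job.
 */
static inline void s2_flush_finish(void)
{
    asm volatile("dsb ish" ::: "memory");
    asm volatile("tlbi vmalle1is" ::: "memory");
    asm volatile("dsb ish; isb" ::: "memory");
}

/* A series of single-IPA invalidations: available on every ARMv8-A core. */
static inline void s2_flush_by_page(uint64_t ipa, uint64_t npages)
{
    uint64_t end = ipa + (npages << PAGE_SHIFT);

    asm volatile("dsb ishst" ::: "memory");
    for ( ; ipa < end; ipa += (1UL << PAGE_SHIFT) )
        /* The register operand carries the frame number, IPA >> 12. */
        asm volatile("tlbi ipas2e1is, %0" : : "r" (ipa >> PAGE_SHIFT) : "memory");
    s2_flush_finish();
}

/*
 * One range invalidation -- only when ID_AA64ISAR0_EL1.TLB reports
 * FEAT_TLBIRANGE. With SCALE = 0 a single instruction covers
 * (NUM + 1) * 2 pages, i.e. 2..64 4KB pages; larger or odd-sized spans
 * need a loop over SCALE values, as Linux's tlbflush.h does.
 */
static inline void s2_flush_by_range(uint64_t ipa, uint64_t npages)
{
    uint64_t arg = (ipa >> PAGE_SHIFT) & ((1ULL << 37) - 1); /* BaseADDR[36:0] */

    arg |= ((npages / 2 - 1) & 0x1f) << 39;                  /* NUM[43:39]     */
    /* SCALE[45:44] left as 0. */
    arg |= 1ULL << 46;                                       /* TG = 0b01: 4KB */

    asm volatile("dsb ishst" ::: "memory");
    asm volatile("tlbi ripas2e1is, %0" : : "r" (arg) : "memory");
    s2_flush_finish();
}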