Re: [PATCH v1 2/6] xen/riscv: introduce things necessary for p2m initialization
On 12.05.2025 11:24, Oleksii Kurochko wrote:
> On 5/9/25 6:14 PM, Andrew Cooper wrote:
>> On 09/05/2025 4:57 pm, Oleksii Kurochko wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/p2m.c
>>> @@ -0,0 +1,168 @@
>>> +#include <xen/domain_page.h>
>>> +#include <xen/iommu.h>
>>> +#include <xen/lib.h>
>>> +#include <xen/mm.h>
>>> +#include <xen/pfn.h>
>>> +#include <xen/rwlock.h>
>>> +#include <xen/sched.h>
>>> +#include <xen/spinlock.h>
>>> +
>>> +#include <asm/page.h>
>>> +#include <asm/p2m.h>
>>> +
>>> +/*
>>> + * Force a synchronous P2M TLB flush.
>>> + *
>>> + * Must be called with the p2m lock held.
>>> + *
>>> + * TODO: add support of flushing TLB connected to VMID.
>>> + */
>>> +static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
>>> +{
>>> +    ASSERT(p2m_is_write_locked(p2m));
>>> +
>>> +    /*
>>> +     * TODO: shouldn't this flush be done for each physical CPU?
>>> +     *       If yes, then SBI call sbi_remote_hfence_gvma() could
>>> +     *       be used for that.
>>> +     */
>>> +#if defined(__riscv_hh) || defined(__riscv_h)
>>> +    asm volatile ( "hfence.gvma" ::: "memory" );
>>> +#else
>>> +    asm volatile ( ".insn r 0x73, 0x0, 0x31, x0, x0, x0" ::: "memory" );
>>> +#endif
>>
>> TLB flushing needs to happen for each pCPU which potentially has cached
>> a mapping.
>>
>> In other arches, this is tracked by d->dirty_cpumask which is the bitmap
>> of pCPUs where this domain is scheduled.
>
> I could only find usage of d->dirty_cpumask in x86 and common code (grant
> tables) for flushing the TLB. However, it seems that d->dirty_cpumask is
> not set anywhere for ARM. Is it sufficient to set a bit in d->dirty_cpumask
> in startup_cpu_idle_loop()?

No, how would the idle loop be relevant here? The bit needs setting for any
pCPU a vCPU of the domain is running on, i.e. somewhere in context switch
code.

> In addition, it’s also necessary to set and clear bits in d->dirty_cpumask
> during context_switch, correct? Set it before switching from the previous
> domain, and clear it after switching to the new domain?
>
> Also, when a bit is set in d->dirty_cpumask, the v->processor value is also
> stored in v->dirty_cpu. Is this needed to track which processor is
> currently being used for the vCPU?
>
>> CPUs need to flush their TLBs before removing themselves from
>> d->dirty_cpumask, which is typically done during context switch, but it
>> means that to flush the P2M, you only need to IPI a subset of CPUs.
>
> I can't find where the P2M flush happens for x86/ARM. Could you please
> point me to where it is handled?

Grep for ept_sync_domain, which will give you several involved functions
(for the Intel, i.e. EPT case).

> Also, I found guest_flush_tlb_mask() for x86. I assume that it is x86
> specific and generally it is enough to have only flush_tlb_mask(), right?

Yes, that's an x86-specific helper which you may or may not want a
counterpart of.

Jan
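
As a rough illustration of the scheme discussed above -- mark a pCPU in
d->dirty_cpumask when a vCPU of the domain is scheduled onto it, flush the
local G-stage TLB before the pCPU clears itself from the mask, and then
fence only the dirty subset from the P2M code -- a minimal sketch might look
like the following. This is not the submitted patch: the
ctxt_switch_to()/ctxt_switch_from() names, the local_hfence_gvma_all()
helper, the p2m->domain back-pointer, and the exact
sbi_remote_hfence_gvma(mask, base, size) signature are all assumptions made
for illustration, following the TODO in the patch and the Linux SBI wrapper
of the same name.

    #include <xen/cpumask.h>
    #include <xen/sched.h>
    #include <xen/smp.h>

    #include <asm/p2m.h>
    #include <asm/sbi.h>

    /* Context-switch side: track which pCPUs may cache this domain's state. */
    static void ctxt_switch_to(struct vcpu *n)
    {
        unsigned int cpu = smp_processor_id();

        /* The incoming vCPU's domain now has state cached on this pCPU. */
        n->dirty_cpu = cpu;
        cpumask_set_cpu(cpu, n->domain->dirty_cpumask);
    }

    static void ctxt_switch_from(struct vcpu *p)
    {
        unsigned int cpu = smp_processor_id();

        /*
         * Flush the local G-stage TLB *before* clearing the bit, so a later
         * P2M flush may legitimately skip this pCPU.
         */
        local_hfence_gvma_all();           /* hypothetical local-flush helper */
        p->dirty_cpu = VCPU_CPU_CLEAN;     /* marker as used by x86; assumed here */
        cpumask_clear_cpu(cpu, p->domain->dirty_cpumask);
    }

    /* P2M side: fence only the pCPUs which may hold stale translations. */
    static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
    {
        struct domain *d = p2m->domain;    /* assumed back-pointer */

        ASSERT(p2m_is_write_locked(p2m));

        /*
         * Only pCPUs still set in d->dirty_cpumask can hold cached
         * guest-physical translations for this domain; HFENCE.GVMA them via
         * SBI rather than broadcasting to every online CPU. The (mask, 0, 0)
         * arguments are assumed to mean "whole guest-physical address space".
         */
        sbi_remote_hfence_gvma(d->dirty_cpumask, 0, 0);
    }

The ordering is the key point from the discussion: the local flush happens
before the pCPU removes itself from d->dirty_cpumask, so the remote-fence
mask in p2m_force_tlb_flush_sync() can safely exclude CPUs the domain is no
longer running on.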