Re: [for 4.22 v5 02/18] xen/riscv: introduce VMID allocation and management
On 20.10.2025 17:57, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/vmid.c
> @@ -0,0 +1,193 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#include <xen/domain.h>
> +#include <xen/init.h>
> +#include <xen/sections.h>
> +#include <xen/lib.h>
> +#include <xen/param.h>
> +#include <xen/percpu.h>
> +
> +#include <asm/atomic.h>
> +#include <asm/csr.h>
> +#include <asm/flushtlb.h>
> +#include <asm/p2m.h>
> +
> +/* Xen command-line option to enable VMIDs */
> +static bool __ro_after_init opt_vmid = true;
> +boolean_param("vmid", opt_vmid);
Command line options, btw, want documenting in
docs/misc/xen-command-line.pandoc.
> +/*
> + * VMIDs partition the physical TLB. In the current implementation VMIDs are
> + * introduced to reduce the number of TLB flushes. Each time a guest-physical
> + * address space changes, instead of flushing the TLB, a new VMID is
> + * assigned. This reduces the number of TLB flushes to at most 1/#VMIDs.
> + * The biggest advantage is that hot parts of the hypervisor's code and data
> + * remain in the TLB.
> + *
> + * Sketch of the Implementation:
> + *
> + * VMIDs are a hart-local resource. As preemption of VMIDs is not possible,
> + * VMIDs are assigned in a round-robin scheme. To minimize the overhead of
> + * VMID invalidation, at the time of a TLB flush, VMIDs are tagged with a
> + * 64-bit generation. Only on a generation overflow does the code need to
> + * invalidate all VMID information stored in the vCPUs which run on the
> + * specific physical processor. When this overflow occurs, VMID usage is
> + * disabled to retain correctness.
> + */
> +
> +/* Per-Hart VMID management. */
> +struct vmid_data {
> + uint64_t generation;
> + uint16_t next_vmid;
> + uint16_t max_vmid;
> + bool used;
> +};
> +
> +static DEFINE_PER_CPU(struct vmid_data, vmid_data);
> +
> +static unsigned int vmidlen_detect(void)
> +{
> + unsigned int vmid_bits;
> +
> + /*
> + * According to the RISC-V Privileged Architecture Spec:
> + * When MODE=Bare, guest physical addresses are equal to supervisor
> + * physical addresses, and there is no further memory protection
> + * for a guest virtual machine beyond the physical memory protection
> + * scheme described in Section "Physical Memory Protection".
> + * In this case, the remaining fields in hgatp must be set to zeros.
> + * Thereby it is necessary to set gstage_mode not equal to Bare.
> + */
> + ASSERT(gstage_mode != HGATP_MODE_OFF);
> + csr_write(CSR_HGATP,
> + MASK_INSR(gstage_mode, HGATP_MODE_MASK) | HGATP_VMID_MASK);
> + vmid_bits = MASK_EXTR(csr_read(CSR_HGATP), HGATP_VMID_MASK);
> + vmid_bits = flsl(vmid_bits);
> + csr_write(CSR_HGATP, _AC(0, UL));
> +
> + /*
> + * From RISC-V spec:
> + * Speculative executions of the address-translation algorithm behave as
> + * non-speculative executions of the algorithm do, except that they must
> + * not set the dirty bit for a PTE, they must not trigger an exception,
> + * and they must not create address-translation cache entries if those
> + * entries would have been invalidated by any SFENCE.VMA instruction
> + * executed by the hart since the speculative execution of the algorithm
> + * began.
> + *
> + * Also, despite the fact that it is stated that when V=0 two-stage
> + * address translation is inactive:
> + * The current virtualization mode, denoted V, indicates whether the hart
> + * is currently executing in a guest. When V=1, the hart is either in
> + * virtual S-mode (VS-mode), or in virtual U-mode (VU-mode) atop a guest
> + * OS running in VS-mode. When V=0, the hart is either in M-mode, in
> + * HS-mode, or in U-mode atop an OS running in HS-mode. The
> + * virtualization mode also indicates whether two-stage address
> + * translation is active (V=1) or inactive (V=0).
> + * But at the same time, writing to the hgatp register activates it:
> + * The hgatp register is considered active for the purposes of
> + * the address-translation algorithm unless the effective privilege mode
> + * is U and hstatus.HU=0.
> + *
> + * This leaves some room for speculation even at this stage of boot,
> + * so the local TLB may have been polluted; flush all guest TLB entries.
> + */
> + local_hfence_gvma_all();
That's a lot of redundancy with gstage_mode_detect(). The function call here
actually renders the one there redundant, afaict. Did you consider putting a
single instance at the end of it in pre_gstage_init()? Otherwise at least
don't repeat the comment here, but merely point at the other one?
> + return vmid_bits;
> +}
> +
> +void vmid_init(void)
This (and its helper) isn't __init because you intend to also call it during
bringup of secondary processors?
> +{
> + static int8_t g_vmid_used = -1;
Now that you're getting closer to the x86 original - __ro_after_init?
> + unsigned int vmid_len = vmidlen_detect();
> + struct vmid_data *data = &this_cpu(vmid_data);
> +
> + BUILD_BUG_ON((HGATP_VMID_MASK >> HGATP_VMID_SHIFT) >
> + (BIT((sizeof(data->max_vmid) * BITS_PER_BYTE), UL) - 1));
> +
> + data->max_vmid = BIT(vmid_len, U) - 1;
> + data->used = opt_vmid && (vmid_len > 1);
> +
> + if ( g_vmid_used < 0 )
> + {
> + g_vmid_used = data->used;
> + printk("VMIDs use is %sabled\n", data->used ? "en" : "dis");
> + }
> + else if ( g_vmid_used != data->used )
> + printk("CPU%u: VMIDs use is %sabled\n", smp_processor_id(),
> + data->used ? "en" : "dis");
> +
> + /* Zero indicates 'invalid generation', so we start the count at one. */
> + data->generation = 1;
> +
> + /* Zero indicates 'VMIDs use disabled', so we start the count at one. */
> + data->next_vmid = 1;
> +}
> +
> +void vmid_flush_vcpu(struct vcpu *v)
> +{
> + write_atomic(&v->arch.vmid.generation, 0);
> +}
> +
> +void vmid_flush_hart(void)
> +{
> + struct vmid_data *data = &this_cpu(vmid_data);
> +
> + if ( !data->used )
> + return;
> +
> + if ( likely(++data->generation != 0) )
> + return;
> +
> + /*
> + * VMID generations are 64 bit. Overflow of generations never happens.
> + * For safety, we simply disable VMIDs, so correctness is established; it
> + * only runs a bit slower.
> + */
> + printk("%s: VMID generation overrun. Disabling VMIDs.\n", __func__);
Is logging of the function name of any value here? Also, despite the x86
original having it like this - generally no full stops please in log
messages. "VMID generation overrun; disabling VMIDs\n" would do.
> +bool vmid_handle_vmenter(struct vcpu_vmid *vmid)
> +{
> + struct vmid_data *data = &this_cpu(vmid_data);
> +
> + /* Test if VCPU has valid VMID. */
x86 has a ->disabled check up from here; why do you not check ->used?
> + if ( read_atomic(&vmid->generation) == data->generation )
> + return 0;
> +
> + /* If there are no free VMIDs, need to go to a new generation. */
> + if ( unlikely(data->next_vmid > data->max_vmid) )
> + {
> + vmid_flush_hart();
> + data->next_vmid = 1;
> + if ( !data->used )
> + goto disabled;
> + }
> +
> + /* Now guaranteed to be a free VMID. */
> + vmid->vmid = data->next_vmid++;
> + write_atomic(&vmid->generation, data->generation);
> +
> + /*
> + * When we assign VMID 1, flush all TLB entries as we are starting a new
> + * generation, and all old VMID allocations are now stale.
> + */
> + return (vmid->vmid == 1);
Minor: Parentheses aren't really needed here.
Jan