
Re: [PATCH v2 03/17] xen/riscv: introduce guest domain's VMID allocation and management


  • To: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 18 Jun 2025 17:46:05 +0200
  • Cc: Alistair Francis <alistair.francis@xxxxxxx>, Bob Eshleman <bobbyeshleman@xxxxxxxxx>, Connor Davis <connojdavis@xxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 18 Jun 2025 15:46:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 10.06.2025 15:05, Oleksii Kurochko wrote:
> Implementation is based on Arm code with some minor changes:
>  - Re-define INVALID_VMID.
>  - Re-define MAX_VMID.
>  - Add TLB flushing when VMID is re-used.
> 
> Also, as part of this patch, the structure p2m_domain is introduced with a
> vmid member inside it. It is necessary for the VMID management functions.
> 
> Add a bitmap-based allocator to manage VMID space, supporting up to 127
> VMIDs on RV32 and 16,383 on RV64 platforms, in accordance with the
> architecture's hgatp VMID field (RV32 - 7 bit long, others - 14 bit long).
> 
> Reserve the highest VMID as INVALID_VMID to ensure it's not reused.

Why must that VMID not be (re)used? INVALID_VMID can be any value that
doesn't fit in the hgatp.VMID field.
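Just to illustrate (macro names invented here, not a request to use these
exact ones), picking a sentinel outside of the architectural range would
avoid burning a usable VMID altogether:

/* Illustration only. */
#ifdef CONFIG_RISCV_32
# define VMID_BITS    7                     /* hgatp.VMID width on RV32 */
#else
# define VMID_BITS    14                    /* hgatp.VMID width on RV64 */
#endif
#define MAX_VMID      (BIT(VMID_BITS, U) - 1)
#define INVALID_VMID  BIT(VMID_BITS, U)     /* not encodable in hgatp.VMID */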

> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -6,6 +6,7 @@ obj-y += intc.o
>  obj-y += irq.o
>  obj-y += mm.o
>  obj-y += pt.o
> +obj-y += p2m.o

Nit: Numbers typically sort ahead of letters.

> --- /dev/null
> +++ b/xen/arch/riscv/p2m.c
> @@ -0,0 +1,115 @@
> +#include <xen/bitops.h>
> +#include <xen/lib.h>
> +#include <xen/sched.h>
> +#include <xen/spinlock.h>
> +#include <xen/xvmalloc.h>
> +
> +#include <asm/p2m.h>
> +#include <asm/sbi.h>
> +
> +static spinlock_t vmid_alloc_lock = SPIN_LOCK_UNLOCKED;
> +
> +/*
> + * hgatp's VMID field is 7 or 14 bits. RV64 may support 14-bit VMID.
> + * Using a bitmap here limits us to 127 (2^7 - 1) or 16383 (2^14 - 1)
> + * concurrent domains.

Which is pretty limiting, especially in the RV32 case. That's why on x86 we
don't assign a permanent ID to VMs, but rather manage IDs per-CPU (note: not
per-vCPU).
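For reference, a rough sketch of that per-pCPU arrangement (loosely modeled
on xen/arch/x86/hvm/asid.c; the names here are purely illustrative):

/* Each pCPU hands out IDs from its own pool and bumps a generation count
 * (flushing the TLB) on rollover, instead of pinning one ID to each VM. */
struct vmid_data {
    uint64_t     generation;  /* bumped on rollover; invalidates stale IDs */
    unsigned int next_vmid;   /* next ID to hand out on this pCPU */
    unsigned int max_vmid;    /* as implemented by hgatp.VMID */
};
static DEFINE_PER_CPU(struct vmid_data, vmid_data);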

> The bitmap space will be allocated dynamically
> + * based on whether 7 or 14 bit VMIDs are supported.
> + */
> +static unsigned long *vmid_mask;
> +static unsigned long *vmid_flushing_needed;
> +
> +/*
> + * -2 here because:
> + *    - -1 is needed to get the maximal possible VMID

I don't follow this part.

> + *    - -1 is reserved for being used as INVALID_VMID

Whereas for this part - see above.

> + */
> +#ifdef CONFIG_RISCV_32
> +#define MAX_VMID (BIT(7, U) - 2)
> +#else

Better "#elif defined(CONFIG_RISCV_64)"?

> +#define MAX_VMID (BIT(14, U) - 2)
> +#endif
> +
> +/* Reserve the max possible VMID to be INVALID. */
> +#define INVALID_VMID (MAX_VMID + 1)
> +
> +void p2m_vmid_allocator_init(void)

__init

> +{
> +    /*
> +     * Allocate space for vmid_mask and vmid_flushing_needed
> +     * based on INVALID_VMID as it is the max possible VMID which just
> +     * was reserved to be INVALID_VMID.
> +     */
> +    vmid_mask = xvzalloc_array(unsigned long, BITS_TO_LONGS(INVALID_VMID));
> +    vmid_flushing_needed =
> +        xvzalloc_array(unsigned long, BITS_TO_LONGS(INVALID_VMID));

These both want to use MAX_VMID + 1; there's no logical connection here to
INVALID_VMID.
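I.e. presumably something like:

    vmid_mask = xvzalloc_array(unsigned long, BITS_TO_LONGS(MAX_VMID + 1));
    vmid_flushing_needed =
        xvzalloc_array(unsigned long, BITS_TO_LONGS(MAX_VMID + 1));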

Furthermore don't you first need to determine how many bits hgatp.VMID actually
implements? The 7 and 14 bits respectively are maximum values only, after all.

VMIDLEN being permitted to be 0, how would you run more than one VM (e.g. Dom0)
on such a system?
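
For illustration, a boot-time probe might look along these lines; the helper
name, HGATP_VMID_MASK, and the CSR accessors are merely assumed here, with
hgatp.VMID taken to be bits 57:44 on RV64 and bits 28:22 on RV32:

static unsigned int __init vmidlen_detect(void)
{
    unsigned long old = csr_read(CSR_HGATP);
    unsigned long vmid_bits;

    /* Unimplemented VMID bits are expected to read back as zero. */
    csr_write(CSR_HGATP, old | HGATP_VMID_MASK);
    vmid_bits = MASK_EXTR(csr_read(CSR_HGATP), HGATP_VMID_MASK);
    csr_write(CSR_HGATP, old);

    /* A local HFENCE.GVMA may be wanted after fiddling with hgatp. */

    return vmid_bits ? flsl(vmid_bits) : 0;
}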

> +    if ( !vmid_mask || !vmid_flushing_needed )
> +        panic("Could not allocate VMID bitmap space or VMID flushing map\n");
> +
> +    set_bit(INVALID_VMID, vmid_mask);

If (see above) this is really needed, __set_bit() please.

> +}
> +
> +int p2m_alloc_vmid(struct domain *d)

Looks like this can be static? (p2m_free_vmid() has no caller at all, so
it's not clear what use it is going to be.)

> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +    int rc, nr;

No need for the blank line between the (few) declarations?

> +    spin_lock(&vmid_alloc_lock);
> +
> +    nr = find_first_zero_bit(vmid_mask, MAX_VMID);

As per this nr wants to be unsigned int.

> +    ASSERT(nr != INVALID_VMID);
> +
> +    if ( nr == MAX_VMID )
> +    {
> +        rc = -EBUSY;
> +        printk(XENLOG_ERR "p2m.c: dom%d: VMID pool exhausted\n",
> +               d->domain_id);

Please use %pd.
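I.e.:

        printk(XENLOG_ERR "p2m.c: %pd: VMID pool exhausted\n", d);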

> +        goto out;
> +    }
> +
> +    set_bit(nr, vmid_mask);

Since you do this under lock, even here __set_bit() ought to be sufficient.

> +    if ( test_bit(p2m->vmid, vmid_flushing_needed) )
> +    {
> +        clear_bit(p2m->vmid, vmid_flushing_needed);

And __clear_bit() here, or yet better use __test_and_clear_bit() in the if().

> +        sbi_remote_hfence_gvma_vmid(d->dirty_cpumask, 0, 0, p2m->vmid);

You're creating d; it cannot possibly have run on any CPU yet. IOW
d->dirty_cpumask will be reliably empty here. I think it would be hard to
avoid issuing the flush to all CPUs here in this scheme.
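If the per-VMID tracking is to be kept, at allocation time something like
this (just a sketch, assuming the helper is happy with &cpu_online_map and
using the freshly allocated nr) would seem to be needed:

    if ( __test_and_clear_bit(nr, vmid_flushing_needed) )
        sbi_remote_hfence_gvma_vmid(&cpu_online_map, 0, 0, nr);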

> +    }
> +
> +    p2m->vmid = nr;
> +
> +    rc = 0;
> +
> +out:

Nit: Style. (Labels want to be indented by one space.)

> +    spin_unlock(&vmid_alloc_lock);
> +    return rc;
> +}
> +
> +void p2m_free_vmid(struct domain *d)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +    spin_lock(&vmid_alloc_lock);
> +
> +    if ( p2m->vmid != INVALID_VMID )
> +    {
> +        clear_bit(p2m->vmid, vmid_mask);
> +        set_bit(p2m->vmid, vmid_flushing_needed);

Does this scheme really avoid any flushes (except near when the system is
about to go down)?

As to choice of functions - see above.

> +    }
> +
> +    spin_unlock(&vmid_alloc_lock);
> +}
> +
> +int p2m_init(struct domain *d)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    int rc;
> +
> +    p2m->vmid = INVALID_VMID;

Given the absence of callers of p2m_free_vmid() it's also not clear what use
this is.

Jan