
Re: [PATCH v2 03/17] xen/riscv: introduce guest domain's VMID allocation and management


  • To: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Thu, 26 Jun 2025 13:43:29 +0200
  • Cc: Alistair Francis <alistair.francis@xxxxxxx>, Bob Eshleman <bobbyeshleman@xxxxxxxxx>, Connor Davis <connojdavis@xxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 26 Jun 2025 11:43:43 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 26.06.25 13:34, Oleksii Kurochko wrote:

On 6/26/25 12:41 PM, Jan Beulich wrote:
On 26.06.2025 12:05, Oleksii Kurochko wrote:
On 6/24/25 4:01 PM, Jan Beulich wrote:
On 24.06.2025 15:47, Oleksii Kurochko wrote:
On 6/24/25 12:44 PM, Jan Beulich wrote:
On 24.06.2025 11:46, Oleksii Kurochko wrote:
On 6/18/25 5:46 PM, Jan Beulich wrote:
On 10.06.2025 15:05, Oleksii Kurochko wrote:
--- /dev/null
+++ b/xen/arch/riscv/p2m.c
@@ -0,0 +1,115 @@
+#include <xen/bitops.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+#include <xen/spinlock.h>
+#include <xen/xvmalloc.h>
+
+#include <asm/p2m.h>
+#include <asm/sbi.h>
+
+static spinlock_t vmid_alloc_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * hgatp's VMID field is 7 or 14 bits. RV64 may support 14-bit VMID.
+ * Using a bitmap here limits us to 127 (2^7 - 1) or 16383 (2^14 - 1)
+ * concurrent domains.
Which is pretty limiting especially in the RV32 case. Hence why we don't
assign a permanent ID to VMs on x86, but rather manage IDs per-CPU (note:
not per-vCPU).
Good point.

I don't believe anyone will use RV32.
For RV64, the available ID space seems sufficiently large.

However, if it turns out that the value isn't large enough even for RV64,
I can rework it to manage IDs per physical CPU.
Wouldn't that approach result in more TLB entries being flushed compared
to per-vCPU allocation, potentially leading to slightly worse performance?
Depends on the condition for when to flush. Of course performance is
unavoidably going to suffer if you have only very few VMIDs to use.
Nevertheless, as indicated before, the model used on x86 may be a
candidate to use here, too. See hvm_asid_handle_vmenter() for the
core (and vendor-independent) part of it.
IIUC, it is basically just round-robin: when VMIDs run out, do a full guest
TLB flush and start re-using VMIDs from the beginning.
That makes sense to me, I'll implement something similar. (I'm not really sure
yet that we need data->core_asid_generation; I'll probably understand it
better once I start implementing it.)
Well. The fewer VMID bits you have, the more quickly you will need a new
generation. And to keep track of which generation you're at, you also need to
track the present number somewhere.
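
(For reference, a minimal standalone C sketch of the generation scheme being
discussed here, loosely modelled on x86's hvm_asid_handle_vmenter(). All the
names - vmid_data, vcpu_vmid, vmid_data_init, vmid_handle_vmenter - are
illustrative only, not existing Xen interfaces.)

#include <stdbool.h>
#include <stdint.h>

struct vmid_data {
    uint64_t generation;   /* current generation on this pCPU */
    uint32_t next_vmid;    /* next free VMID on this pCPU */
    uint32_t max_vmid;     /* largest usable VMID, e.g. (1U << VMIDLEN) - 1 */
};

struct vcpu_vmid {
    uint64_t generation;   /* generation the VMID below was taken from */
    uint32_t vmid;         /* 0 == never assigned */
};

static void vmid_data_init(struct vmid_data *data, unsigned int vmidlen)
{
    /* Start at generation 1 so a fresh vCPU (generation 0) gets a VMID. */
    data->generation = 1;
    data->next_vmid  = 1;
    data->max_vmid   = (1U << vmidlen) - 1;
}

/* Called on every VM entry; returns true if the guest TLB must be flushed. */
static bool vmid_handle_vmenter(struct vmid_data *data, struct vcpu_vmid *v)
{
    /* VMID is still valid for the current generation: nothing to do. */
    if ( v->generation == data->generation )
        return false;

    /* No free VMIDs left: start a new generation; all old VMIDs are stale. */
    if ( data->next_vmid > data->max_vmid )
    {
        data->generation++;
        data->next_vmid = 1;   /* keep VMID 0 as "never assigned" */
    }

    v->vmid = data->next_vmid++;
    v->generation = data->generation;

    /* Only the first allocation of a new generation requires a flush. */
    return v->vmid == 1;
}

A flush is thus only needed when the per-pCPU VMID space wraps, i.e. once per
generation, not on every reassignment.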

What about allocating VMIDs per-domain then?
That's what you're doing right now, isn't it? And that gets problematic when
you have only very few bits in hgatp.VMID, as mentioned below.
Right, I just phrased my question poorly - sorry about that.

What I meant to ask is: does the approach described above actually depend on
whether VMIDs are allocated per-domain or per-pCPU? It seems that the main
advantage of allocating VMIDs per-pCPU is potentially reducing the number of
TLB flushes, since it's more likely that a platform will have more than
VMID_MAX domains than that it will have more than VMID_MAX physical CPUs -
am I right?
Seeing that there can be systems with hundreds or even thousands of CPUs,
I don't think I can agree here. Plus per-pCPU allocation would similarly
get you in trouble when you have only very few VMID bits.
But not as fast as in the case of per-domain allocation, right?

I mean that if we have only 4 bits, then with per-domain allocation we will
need to do a TLB flush + VMID re-assignment as soon as we have more than 16
domains.

But with per-pCPU allocation we could run 16 domains on 1 pCPU, and at the
same time a multiprocessor system has more pCPUs, which allows us to run more
domains and avoid TLB flushes.
On the other hand, we need to consider that it's unlikely that a domain will
have only one vCPU. And it is likely that the number of vCPUs will be bigger
than the number of domains, so a round-robin approach (as on x86) without a
permanent ID allocation for each domain will work better than per-pCPU
allocation.
Here you (appear to) say one thing, ...

In other words, I'm not 100% sure I get the point of why x86 chose per-pCPU
allocation instead of per-domain allocation, with the same VMID for all vCPUs
of a domain.
... and then here the opposite. Overall I'm in severe trouble understanding this
reply of yours as a whole, so I fear I can't really respond to it (or even just
parts thereof).

IIUC, x86 allocates VMIDs per physical CPU (pCPU) "dynamically" — these are just
sequential numbers, and once VMIDs run out on a given pCPU, there's no guarantee
that a vCPU will receive the same VMID again.

On the other hand, RISC-V currently allocates a single VMID per domain, and that
VMID is considered "permanent" until the domain is destroyed. This means we are
limited to at most VMID_MAX domains. To avoid this limitation, I plan to
implement a round-robin reuse approach: when no free VMIDs remain, we start a
new generation and begin reusing old VMIDs.
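
(A minimal sketch of that per-domain round-robin reuse, assuming a single
global allocator. The names - domain_vmid, vmid_refresh, MAX_VMID - are
illustrative only; a real implementation would hold something like the
vmid_alloc_lock from the patch around the refresh.)

#include <stdbool.h>
#include <stdint.h>

#define MAX_VMID  ((1U << 7) - 1)     /* placeholder: derived from VMIDLEN at boot */

struct domain_vmid {
    uint64_t generation;              /* generation the VMID below was allocated in */
    uint32_t vmid;                    /* 0 == not assigned yet */
};

static uint64_t vmid_generation = 1;  /* start at 1 so a fresh domain (gen 0) allocates */
static uint32_t next_vmid = 1;        /* VMID 0 reserved as "unassigned" */

/*
 * Refresh a domain's VMID before it is (re)used.  Returns true if the caller
 * must flush the guest TLBs (HFENCE.GVMA on RISC-V), because the whole VMID
 * space was recycled.
 */
static bool vmid_refresh(struct domain_vmid *d)
{
    bool need_flush = false;

    if ( d->generation == vmid_generation )
        return false;                 /* VMID still valid in this generation */

    if ( next_vmid > MAX_VMID )
    {
        vmid_generation++;            /* recycle the whole VMID space */
        next_vmid = 1;
        need_flush = true;
    }

    d->vmid = next_vmid++;
    d->generation = vmid_generation;

    return need_flush;
}

Note that with a global scheme a generation bump also invalidates the VMIDs of
domains that may still be running on other pCPUs, so those would need to be
refreshed (or paused) as well - which is part of the trade-off with the
per-pCPU scheme discussed below.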

The only remaining design question is whether we want RISC-V to follow a global
VMID allocation policy (i.e., one VMID per domain, shared across all of its
vCPUs), or adopt a policy similar to x86 with per-CPU VMID allocation (each
vCPU gets its own VMID, local to the CPU it's running on).

Each policy has its own trade-offs. But in the case where the number of
available VMIDs is small (i.e., low VMIDLEN), a global allocation policy may be
more suitable, as it requires fewer VMIDs overall.

So my main question was:
What are the advantages of per-pCPU VMID allocation in scenarios with limited
VMID space, and why did x86 choose that design?

From what I can tell, the benefits of per-pCPU VMID allocation include:
- Minimized inter-CPU TLB flushes — since VMIDs are local, TLB entries don't
  need to be invalidated on other CPUs when reused.
- Better scalability — this approach works better on systems with a large
  number of CPUs.
- Frequent VM switches don't require global TLB flushes — reducing the
  overhead of context switching.
However, the downside is that this model consumes more VMIDs. For example, if a
single domain runs on 4 vCPUs across 4 CPUs, it will consume 4 VMIDs instead of
just one.

Consider you have 4 bits for VMIDs, resulting in 16 VMID values.

If you have a system with 32 physical CPUs and 32 domains with 1 vCPU each
on that system, your scheme would NOT allow keeping each physical CPU busy
by running a domain on it, as only 16 domains could be active at the same
time.


Juergen

