
Re: [PATCH v2] ioreq: Check for out of bounds vCPU ID


  • To: Teddy Astie <teddy.astie@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 14 Nov 2025 18:23:29 +0000
  • Cc: Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julian Vetter <julian.vetter@xxxxxxxxxx>
  • Delivery-date: Fri, 14 Nov 2025 18:23:49 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 14/11/2025 4:32 pm, Teddy Astie wrote:
> A 4K page appears to be able to hold 128 ioreq entries, which luckily
> matches the current vCPU limit. However, if we decide to increase the
> vCPU limit, that no longer holds, and this function would silently
> fetch an out-of-bounds pointer.
>
> No architecture allows more than 128 vCPUs on HVM guests, and all use
> pages of at most 4 KB, so this case cannot occur with the current
> limits.
>
> Make sure that out-of-bounds accesses are reported, and adjust the
> surrounding logic to, at worst, crash the offending domain instead.
>
> No functional change.
>
> Reported-by: Julian Vetter <julian.vetter@xxxxxxxxxx>
> Signed-off-by: Teddy Astie <teddy.astie@xxxxxxxxxx>
> ---
> v2:
>  - check and report instead of ASSERT and eventually crash offending domain
>
>  xen/common/ioreq.c | 27 ++++++++++++++++++++++++---
>  1 file changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> index f5fd30ce12..a2a2dafe85 100644
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -100,7 +100,14 @@ static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v)
>      ASSERT((v == current) || !vcpu_runnable(v));
>      ASSERT(p != NULL);
>  
> -    return &p->vcpu_ioreq[v->vcpu_id];
> +    if ( likely(v->vcpu_id < (PAGE_SIZE / sizeof(struct ioreq))) )
> +        return &p->vcpu_ioreq[v->vcpu_id];
> +    else
> +    {
> +        gprintk(XENLOG_ERR, "Out of bounds vCPU %pv in ioreq server\n", v);
> +        WARN();
> +        return NULL;
> +    }
>  }

While I appreciate this is fixing a latent bug, it's very invasive *and*
needs removing as part of changing the 128 limit.  (i.e. it's a lot of
churn for no benefit in the meantime.)

Xen cannot release in a position where the user can request 130 CPUs but
explodes like this on the top two.

Furthermore, the WARN() isn't ratelimited, yet this is
guest-triggerable.  What we need to do is borrow WARN_ON_ONCE() from
Linux for cases like this, where it's obvious in the logs but can't be
triggered repeatedly.

~Andrew
