
Re: [PATCH v5 1/3] ioreq: Unify buf and non-buf ioreq page management


  • To: Julian Vetter <julian.vetter@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 24 Mar 2026 16:21:28 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 24 Mar 2026 15:21:48 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.03.2026 12:17, Julian Vetter wrote:
> Switch the ioreq page mapping in hvm_map_ioreq_gfn() from
> prepare_ring_for_helper() / __map_domain_page_global() to explicit
> vmap(), aligning it with ioreq_server_alloc_mfn() which already
> allocates domain-heap pages and will now also map them via vmap().

In debug builds it did so before already, just indirectly through
map_domain_page_global(). You may want to adjust the wording slightly.

> With both paths using vmap(), vmap_to_page() can recover the struct
> page_info * uniformly during teardown, removing the need to cache the
> page pointer in struct ioreq_page. So, drop the 'page' field from struct
> ioreq_page and update all callers accordingly.
> 
> Signed-off-by: Julian Vetter <julian.vetter@xxxxxxxxxx>

What's missing is _why_ you actually want to make this change. Without
that info, one may want to reject the change for adding overhead for no
gain. Stating the motivation would also help with judging naming choices
like "base_gfn".

> @@ -128,8 +129,9 @@ static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, 
> bool buf)
>      if ( gfn_eq(iorp->gfn, INVALID_GFN) )
>          return;
>  
> -    destroy_ring_for_helper(&iorp->va, iorp->page);
> -    iorp->page = NULL;
> +    put_page_and_type(vmap_to_page(iorp->va));
> +    vunmap(iorp->va);
> +    iorp->va = NULL;

In ioreq_server_deinit() you alter a comment regarding
arch_ioreq_server_unmap_pages(), which calls the function here. The
property described there looks to be lost.

Here (and in the counterpart function below) I think you also want to
leave a comment that this is effectively
{destroy,prepare}_ring_for_helper(), merely using vmap(). That'll
increase the chance of noticing a change is needed here as well in
case those functions are modified.

> @@ -157,17 +163,45 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, 
> bool buf)
>      if ( d->is_dying )
>          return -EINVAL;
>  
> -    iorp->gfn = hvm_alloc_ioreq_gfn(s);
> +    base_gfn = hvm_alloc_ioreq_gfn(s);
>  
> -    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
> +    if ( gfn_eq(base_gfn, INVALID_GFN) )
>          return -ENOMEM;
>  
> -    rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
> -                                 &iorp->va);
> -
> +    /*
> +     * vmap() is used for the Xen-side mapping so that vmap_to_page() can
> +     * recover the struct page_info * during teardown, consistent with
> +     * ioreq_server_alloc_mfn().
> +     */
> +    rc = check_get_page_from_gfn(d, base_gfn, false, &p2mt, &page);
>      if ( rc )
> -        hvm_unmap_ioreq_gfn(s, buf);

With the comment above addressed, I think this should be possible to keep.
(FTAOD the same isn't true for the other error paths further down.)

> @@ -262,8 +262,9 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, 
> bool buf)
>  {
>      struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;

This is a good place to comment on the patch title: here we're already
dealing (in a unified manner) with both buffered and non-buffered ioreq-s,
and nothing is being further unified in this regard. I think you must mean
something else.

> @@ -309,14 +310,13 @@ static int ioreq_server_alloc_mfn(struct ioreq_server 
> *s, bool buf)
>  static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
>  {
>      struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> -    struct page_info *page = iorp->page;
> +    struct page_info *page;
>  
> -    if ( !page )
> +    if ( !iorp->va )
>          return;
>  
> -    iorp->page = NULL;
> -
> -    unmap_domain_page_global(iorp->va);
> +    page = vmap_to_page(iorp->va);
> +    vunmap(iorp->va);
>      iorp->va = NULL;
>  
>      put_page_alloc_ref(page);

Operations want re-ordering a little, to retain the prior property that
what the if() at the top checks for is cleared _before_ anything is
freed / unmapped.

> @@ -819,12 +821,12 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t 
> id,
>          if ( !HANDLE_BUFIOREQ(s) )
>              goto out;
>  
> -        *mfn = page_to_mfn(s->bufioreq.page);
> +        *mfn = page_to_mfn(vmap_to_page(s->bufioreq.va));

You did look at what vmap_to_page() expands to, didn't you? If you did, didn't
it occur to you to use vmap_to_mfn() directly? (Applies elsewhere as well.)

Jan
