Re: [PATCH v5 1/3] ioreq: Unify buf and non-buf ioreq page management
On 16.03.2026 12:17, Julian Vetter wrote:
> Switch the ioreq page mapping in hvm_map_ioreq_gfn() from
> prepare_ring_for_helper() / __map_domain_page_global() to explicit
> vmap(), aligning it with ioreq_server_alloc_mfn() which already
> allocates domain-heap pages and will now also map them via vmap().
In debug builds it already mapped them via vmap() before, just indirectly
through map_domain_page_global(). You may want to adjust the wording slightly.
> With both paths using vmap(), vmap_to_page() can recover the struct
> page_info * uniformly during teardown, removing the need to cache the
> page pointer in struct ioreq_page. So, drop the 'page' field from struct
> ioreq_page and update all callers accordingly.
>
> Signed-off-by: Julian Vetter <julian.vetter@xxxxxxxxxx>
What's missing is _why_ you actually want to make this change. Without
that info, one may want to reject the change for adding overhead for no
gain. This would then also help with naming choices like "base_gfn".
> @@ -128,8 +129,9 @@ static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
> if ( gfn_eq(iorp->gfn, INVALID_GFN) )
> return;
>
> - destroy_ring_for_helper(&iorp->va, iorp->page);
> - iorp->page = NULL;
> + put_page_and_type(vmap_to_page(iorp->va));
> + vunmap(iorp->va);
> + iorp->va = NULL;
In ioreq_server_deinit() you alter a comment regarding
arch_ioreq_server_unmap_pages(), which calls the function here. The
property described there looks to be lost.
Here (and in the counterpart function below) I think you also want to
leave a comment that this is effectively
{destroy,prepare}_ring_for_helper(), merely using vmap(). That'll
increase the chance of noticing a change is needed here as well in
case those functions are modified.
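E.g. (the comment wording here is merely a suggestion of mine):

    /*
     * Effectively destroy_ring_for_helper(), just open-coded in terms of
     * vmap_to_page() / vunmap(). If that function changes, this likely
     * wants changing as well.
     */
    put_page_and_type(vmap_to_page(iorp->va));
    vunmap(iorp->va);
    iorp->va = NULL;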
> @@ -157,17 +163,45 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
> if ( d->is_dying )
> return -EINVAL;
>
> - iorp->gfn = hvm_alloc_ioreq_gfn(s);
> + base_gfn = hvm_alloc_ioreq_gfn(s);
>
> - if ( gfn_eq(iorp->gfn, INVALID_GFN) )
> + if ( gfn_eq(base_gfn, INVALID_GFN) )
> return -ENOMEM;
>
> - rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
> - &iorp->va);
> -
> + /*
> + * vmap() is used for the Xen-side mapping so that vmap_to_page() can
> + * recover the struct page_info * during teardown, consistent with
> + * ioreq_server_alloc_mfn().
> + */
> + rc = check_get_page_from_gfn(d, base_gfn, false, &p2mt, &page);
> if ( rc )
> - hvm_unmap_ioreq_gfn(s, buf);
With the comment above addressed, I think this should be possible to keep.
(FTAOD the same isn't true for the other error paths further down.)
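For reference, I'd expect the vmap()-based counterpart of
prepare_ring_for_helper() to then continue roughly like this (a sketch only,
modelled on that function; names and error handling are assumptions of mine,
not taken from the patch):

    mfn_t mfn;

    if ( !get_page_type(page, PGT_writable_page) )
    {
        put_page(page);
        return -EINVAL;
    }

    mfn = page_to_mfn(page);
    iorp->va = vmap(&mfn, 1);
    if ( !iorp->va )
    {
        put_page_and_type(page);
        return -ENOMEM;
    }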
> @@ -262,8 +262,9 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
> {
> struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
This is a good place to comment on the patch title: Here we're dealing (in a
unified manner) with both buffered and non-buffered ioreq-s. There's nothing
being further unified in this regard. I think you must mean something else.
> @@ -309,14 +310,13 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
> static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
> {
> struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> - struct page_info *page = iorp->page;
> + struct page_info *page;
>
> - if ( !page )
> + if ( !iorp->va )
> return;
>
> - iorp->page = NULL;
> -
> - unmap_domain_page_global(iorp->va);
> + page = vmap_to_page(iorp->va);
> + vunmap(iorp->va);
> iorp->va = NULL;
>
> put_page_alloc_ref(page);
Operations want re-ordering a little, to retain the prior property that what
the if() at the top checks for gets cleared _before_ anything is freed /
unmapped.
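I.e. something along these lines (just a sketch of what I have in mind):

    struct page_info *page;
    void *va = iorp->va;

    if ( !va )
        return;

    /* Clear what the if() above checks for before freeing / unmapping. */
    iorp->va = NULL;

    page = vmap_to_page(va);
    vunmap(va);

    put_page_alloc_ref(page);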
> @@ -819,12 +821,12 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id,
> if ( !HANDLE_BUFIOREQ(s) )
> goto out;
>
> - *mfn = page_to_mfn(s->bufioreq.page);
> + *mfn = page_to_mfn(vmap_to_page(s->bufioreq.va));
You did look at what vmap_to_page() expands to, didn't you? If you did, didn't
it occur to you to use vmap_to_mfn() directly? (Applies elsewhere as well.)
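I.e. simply (vmap_to_page() being mfn_to_page(vmap_to_mfn()), unless I'm
mistaken):

    *mfn = vmap_to_mfn(s->bufioreq.va);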
Jan