
Re: [PATCH 2/2] xen/arm: vpl011: Fix domain_vpl011_init error path





On 23/03/2023 12:13, Michal Orzel wrote:
Hi Julien,

On 23/03/2023 12:33, Julien Grall wrote:


On 23/03/2023 11:10, Michal Orzel wrote:
Hi Julien,

On 22/03/2023 17:19, Julien Grall wrote:


Hi Michal,

On 22/03/2023 10:29, Michal Orzel wrote:
When vgic_reserve_virq() fails and the backend is in the domain, we should
also free the allocated event channel.

When the backend is in Xen and the call to xzalloc() returns NULL, there is
no need to call xfree(). This should instead be done on the error path from
vgic_reserve_virq().

Most likely this was implemented this way to avoid a double "if (
vpl011->backend_in_domain )". TBH, I am not very thrilled with this
approach. Could we instead consider using domain_vpl011_deinit()? (A
couple of tweaks would be necessary to use it.)
I think we could. More about it later.
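
A minimal sketch of that direction (assuming domain_vpl011_deinit() is first
made robust against a partially initialised vpl011, e.g. with the NULL/0
checks discussed further down) could look like:

    /*
     * Sketch only, not the actual patch: a single error label in
     * domain_vpl011_init() delegating all cleanup to the deinit helper.
     */
    fail:
        domain_vpl011_deinit(d);
        return rc;

so the individual out1/out2 labels and their partial cleanup would go away.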


Also, take the opportunity to return -ENOMEM
instead of -EINVAL when memory allocation fails.

Fixes: 1ee1e4b0d1ff ("xen/arm: Allow vpl011 to be used by DomU")
Signed-off-by: Michal Orzel <michal.orzel@xxxxxxx>
---
    xen/arch/arm/vpl011.c | 11 +++++++----
    1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index 541ec962f189..df29a65ad365 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -696,8 +696,8 @@ int domain_vpl011_init(struct domain *d, struct vpl011_init_info *info)
            vpl011->backend.xen = xzalloc(struct vpl011_xen_backend);
            if ( vpl011->backend.xen == NULL )
            {
-            rc = -EINVAL;
-            goto out1;
+            rc = -ENOMEM;
+            goto out;
            }
        }

@@ -720,12 +720,15 @@ int domain_vpl011_init(struct domain *d, struct vpl011_init_info *info)
    out2:
        vgic_free_virq(d, vpl011->virq);

+    if ( vpl011->backend_in_domain )
+        free_xen_event_channel(d, vpl011->evtchn);
+    else
+        xfree(vpl011->backend.xen);

There is another bug here (unrelated to your change). You want to use
XFREE() to avoid an extra free in domain_vpl011_deinit(). Can you look at it?
Strictly speaking this is not a bug. Memory allocation can only happen if
the backend is in Xen. This means that if vpl011 init fails, we will call
free only once (domain_vpl011_deinit will not be called on this path, i.e.
we will invoke panic after construct_domU).

Well yes, in the current use this is not a real bug (it is only latent).
But the same reasoning is also true for adding the call to
free_xen_event_channel(), because we would not continue to run the domain
if domain_vpl011_init() is failing (even when the backend is in the
domain). And even if we were going to continue, this is just a channel
that cannot be used. It will get freed when the domain is destroyed
(either explicitly in deinit() or by evtchn_destroy()).

Of course, we could switch to XFREE just for sanity.
This is not just about sanity here. You are relying on how the caller is
behaving, and we have no guarantee this is going to stay the same forever.
For instance, one may decide that it would be fine to continue even if
construct_domU() is failing (e.g. because the domain is not critical).
At this point, this would become a real bug.
ok, makes sense.
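
To make the above concrete, the only difference is that XFREE() also resets
the stored pointer, so a later domain_vpl011_deinit() cannot hand the same
allocation to xfree() twice. A minimal sketch of the pattern (the real macro
is defined next to xfree() in Xen's xmalloc header):

    /* Free the Xen backend and forget the pointer, so that a subsequent
     * cleanup path sees NULL and its xfree(NULL) stays a no-op. */
    XFREE(vpl011->backend.xen);   /* roughly: xfree(p); p = NULL; */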


+
    out1:
        if ( vpl011->backend_in_domain )
            destroy_ring_for_helper(&vpl011->backend.dom.ring_buf,
                                    vpl011->backend.dom.ring_page);
-    else
-        xfree(vpl011->backend.xen);

    out:
        return rc;

The solution to reuse domain_vpl011_deinit() would be as follows:

       vgic_free_virq(d, vpl011->virq);

We should move this call into domain_vpl011_deinit().
True, and I think it does not need any guard, as in the case of a virq that
was not registered it will just clear the already cleared bit.

Technically it could have been reserved by someone else afterwards, so it would be best to zero it (we allocate an SPI, so we could use 0 as invalid).
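
A minimal sketch of what that could look like in domain_vpl011_deinit()
(assuming 0 is treated as "no virq reserved", which works here because the
vpl011 virq is always an allocated SPI):

    /* Release the virq on teardown and mark it as gone, so calling the
     * deinit path after a failed or partial init stays harmless. */
    if ( vpl011->virq )
    {
        vgic_free_virq(d, vpl011->virq);
        vpl011->virq = 0;
    }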

Cheers,

--
Julien Grall



 

