
Re: [Xen-devel] Problem with IOMEM and domain reboot



On Mon, Feb 12, 2018 at 07:22:27PM +0200, Oleksandr Grytsov wrote:
> On Wed, Feb 7, 2018 at 2:14 PM, Oleksandr Andrushchenko <andr2000@xxxxxxxxx>
> wrote:
> 
> > On 02/06/2018 02:52 PM, Wei Liu wrote:
> >
> >> On Tue, Feb 06, 2018 at 02:44:56PM +0200, Oleksandr Andrushchenko wrote:
> >>
> >>>   From aa1f20af73a5a3c8f2c904b857a79334d18d41ff Mon Sep 17 00:00:00 2001
> >>>
> >>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
> >>>>> Date: Wed, 20 Dec 2017 17:51:18 +0200
> >>>>> Subject: [PATCH] [HACK] Reset iomem's gfn to LIBXL_INVALID_GFN on
> >>>>> reboot
> >>>>>
> >>>>> During domain reboot its configuration is partially reused
> >>>>> to re-create the new domain, but the iomem GFN field is only
> >>>>> restored for those memory ranges configured in the form
> >>>>> IOMEM_START,NUM_PAGES@GFN, but not for those in the form
> >>>>> IOMEM_START,NUM_PAGES, i.e. without a GFN. For the latter,
> >>>>> GFN is reset to 0, but while mapping ranges to a domain during
> >>>>> reboot a GFN is treated as valid if it is not equal to
> >>>>> LIBXL_INVALID_GFN, thus making Xen map IOMEM_START to address 0
> >>>>> in the guest's address space.
> >>>>>
> >>>>> Work around this by resetting GFN to LIBXL_INVALID_GFN, so xl
> >>>>> can set proper values for the mapping on reboot.
> >>>>>
> >>>>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.
> >>>>> com>
> >>>>> ---
> >>>>>    tools/libxl/libxl_domain.c | 9 +++++++++
> >>>>>    1 file changed, 9 insertions(+)
> >>>>>
> >>>>> diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
> >>>>> index ef1a0927b00d..2678ad2ad54f 100644
> >>>>> --- a/tools/libxl/libxl_domain.c
> >>>>> +++ b/tools/libxl/libxl_domain.c
> >>>>> @@ -1647,6 +1647,15 @@ int libxl_retrieve_domain_configuration(libxl_ctx
> >>>>> *ctx, uint32_t domid,
> >>>>>            }
> >>>>>        }
> >>>>> +    /* reset IOMEM's GFN to initial value */
> >>>>> +    {
> >>>>> +        int i;
> >>>>> +
> >>>>> +        for (i = 0; i < d_config->b_info.num_iomem; i++)
> >>>>> +            if (d_config->b_info.iomem[i].gfn == 0)
> >>>>> +                d_config->b_info.iomem[i].gfn = LIBXL_INVALID_GFN;
> >>>>> +    }
> >>>>> +
> >>>>>
> >>>> I don't think this is necessary. Instead we should tell libxl to save
> >>>> the generated value into the template. Adding an update_config hook for
> >>>> the iomem type would be better.
> >>>>
> >>> Agreed, this is why I tagged the patch as [HACK].
> >>> Unfortunately, I have little knowledge of libxl and am not sure
> >>> how to fix it properly. Can you tell me a bit more about what
> >>> a proper fix could look like?
> >>>
> >> See libxl__update_domain_configuration, which is called after domain
> >> construction is completed. It will call the update_config hook for a
> >> device type to save anything that is generated in the process of domain
> >> creation. One example is in libxl_nic. You can do the same for iomem, I
> >> think.
> >>
> >> The end result is the generated values you care about are saved into the
> >> template. When the domain is migrated / rebooted libxl will use the
> >> saved values instead.
> >>
> > Thank you, will look at it to make a proper fix
> >
> > Strictly speaking your patch adding the snippet to
> >> libxl_retrieve_domain_configuration isn't wrong, but I would prefer that
> >> function to contain only code that fetches state which can change during
> >> domain runtime. The iomem range isn't one of those states, AIUI.
> >>
> >> Wei.
> >>>>
> >>> Thank you,
> >>> Oleksandr
> >>>
> >>
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxxx
> > https://lists.xenproject.org/mailman/listinfo/xen-devel
> >
> 
> 
> Hi Wei,
> 
> The root cause of this problem is that the auto-generated code doesn't
> handle default values when JSON is parsed. This is related not only to
> IOMEM but potentially to some other structures as well.
> 

Oh, that. It most likely affects primitive types.

> The "gfn" field of the libxl_iomem_range structure has the default
> value LIBXL_INVALID_GFN, which is not 0.
> 
> libxl_iomem_range = Struct("iomem_range", [
>     ...
>     ("gfn", uint64, {'init_val': "LIBXL_INVALID_GFN"}),
>     ])
> 
> The default value is handled correctly when json is generated:
> 
> yajl_gen_status libxl_iomem_range_gen_json(yajl_gen hand, libxl_iomem_range
> *p)
> {
>     ...
> 
>     if (p->gfn != LIBXL_INVALID_GFN) {
>         s = yajl_gen_string(hand, (const unsigned char *)"gfn",
> sizeof("gfn")-1);
>         if (s != yajl_gen_status_ok)
>             goto out;
>         s = libxl__uint64_gen_json(hand, p->gfn);
>         if (s != yajl_gen_status_ok)
>             goto out;
>     }
> 
>     ...
> }
> 
> But when the JSON is parsed, the "gfn" field is parsed like any other
> uint64 value.
> As a result we get 0 instead of LIBXL_INVALID_GFN.

Why? The above snippet says no output is generated if the value is
LIBXL_INVALID_GFN.

Hence ...

> 
> int libxl__iomem_range_parse_json(libxl__gc *gc, const libxl__json_object
> *o, libxl_iomem_range *p)
> {
>     ...
> 
>     {
>         const libxl__json_object *saved_gfn = x;
>         x = libxl__json_map_get("gfn", o, JSON_INTEGER);

... This should fail to get a value from the "gfn" key, right?

>         if (x) {
>             rc = libxl__uint64_parse_json(gc, x, &p->gfn);
>             if (rc)
>                 goto out;
>         }
>         x = saved_gfn;
>     }
> }
> 
> Is this by design, or is there an issue with parse_json?
> If it is by design then the solution you proposed (an update_config hook)
> will solve this problem. But handling the default value in parse_json
> looks more correct.

I need to figure out what is going on before I can answer these
questions. :-)

Wei.

 

