
Re: [PATCH v2] tools/helpers: don't log errors when trying to load PVH xenstore-stubdom



On 27/01/2023 5:54 am, Juergen Gross wrote:
> When loading a Xenstore stubdom, the loader doesn't know whether the
> to-be-loaded kernel is a PVH or a PV one. So it first tries to load it
> as a PVH one, and if this fails, it loads it as a PV kernel.

Well, yes it does.

What might be missing is libxenguest's ability to parse the provided
kernel's ELF Notes ahead of trying to build the domain.

This is the same kind of poor design that has left us with a
dom0=pv|pvh cmdline option which must match the kernel the bootloader
gave us if we don't want to panic() on boot.

So while this might be an acceptable gross bodge in the short term, this...

>
> This results in errors being logged in case the stubdom is a PV kernel.
>
> Suppress those errors by setting the minimum logging level to
> "critical" while trying to load the kernel as PVH.
>
> Fixes: f89955449c5a ("tools/init-xenstore-domain: support xenstore pvh stubdom")
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> ---
> V2:
> - retry PVH loading with logging if PV fails, too (Jan Beulich)
> ---
>  tools/helpers/init-xenstore-domain.c | 24 ++++++++++++++++--------
>  1 file changed, 16 insertions(+), 8 deletions(-)
>
> diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
> index 04e351ca29..4e2f8d4da5 100644
> --- a/tools/helpers/init-xenstore-domain.c
> +++ b/tools/helpers/init-xenstore-domain.c
> @@ -31,6 +31,8 @@ static int memory;
>  static int maxmem;
>  static xen_pfn_t console_gfn;
>  static xc_evtchn_port_or_error_t console_evtchn;
> +static xentoollog_level minmsglevel = XTL_PROGRESS;
> +static void *logger;
>  
>  static struct option options[] = {
>      { "kernel", 1, NULL, 'k' },
> @@ -141,19 +143,29 @@ static int build(xc_interface *xch)
>          goto err;
>      }
>  
> +    /* Try PVH first, suppress errors by setting min level high. */

... needs to make the position clear.

/* This is a bodge.  We can't currently inspect the kernel's ELF notes
ahead of attempting to construct a domain, so try PVH first, suppressing
errors by raising the minimum log level, and fall back to PV. */

~Andrew

>      dom->container_type = XC_DOM_HVM_CONTAINER;
> +    xtl_stdiostream_set_minlevel(logger, XTL_CRITICAL);
>      rv = xc_dom_parse_image(dom);
> +    xtl_stdiostream_set_minlevel(logger, minmsglevel);
>      if ( rv )
>      {
>          dom->container_type = XC_DOM_PV_CONTAINER;
>          rv = xc_dom_parse_image(dom);
>          if ( rv )
>          {
> -            fprintf(stderr, "xc_dom_parse_image failed\n");
> -            goto err;
> +            /* Retry PVH, now with normal logging level. */
> +            dom->container_type = XC_DOM_HVM_CONTAINER;
> +            rv = xc_dom_parse_image(dom);
> +            if ( rv )
> +            {
> +                fprintf(stderr, "xc_dom_parse_image failed\n");
> +                goto err;
> +            }
>          }
>      }
> -    else
> +
> +    if ( dom->container_type == XC_DOM_HVM_CONTAINER )
>      {
>          config.flags |= XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap;
>          config.arch.emulation_flags = XEN_X86_EMU_LAPIC;
> @@ -412,8 +424,6 @@ int main(int argc, char** argv)
>      char buf[16], be_path[64], fe_path[64];
>      int rv, fd;
>      char *maxmem_str = NULL;
> -    xentoollog_level minmsglevel = XTL_PROGRESS;
> -    xentoollog_logger *logger = NULL;
>  
>      while ( (opt = getopt_long(argc, argv, "v", options, NULL)) != -1 )
>      {
> @@ -456,9 +466,7 @@ int main(int argc, char** argv)
>          return 2;
>      }
>  
> -    logger = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr,
> -                                                               minmsglevel, 0);
> -
> +    logger = xtl_createlogger_stdiostream(stderr, minmsglevel, 0);
>      xch = xc_interface_open(logger, logger, 0);
>      if ( !xch )
>      {
