
Re: [Xen-devel] [PATCH] libxl: enabling upstream qemu as pure pv backend.



On Wed, 2011-06-08 at 09:55 +0100, Ian Campbell wrote:
> On Wed, 2011-06-08 at 04:19 +0100, Wei Liu wrote:
> > commit 02cf9f9cfdf720c636c6ba08f795e49b5eb1f03e
> > Author: Wei Liu <liuw@xxxxxxxxx>
> > Date:   Wed Jun 8 11:13:25 2011 +0800
> > 
> >     libxl: enabling upstream qemu as pure pv backend.
> >     
> >     This patch makes device_model_{version,override} work for pure PV
> >     guests, so that users can specify upstream qemu as the pure PV
> >     backend rather than the traditional qemu-xen.
> >     
> >     Signed-off-by: Wei Liu <liuw@xxxxxxxxx>
> > 
> > @@ -909,8 +909,8 @@ static int libxl__build_xenpv_qemu_args(libxl__gc *gc,
> >                                          libxl_device_model_info *info)
> >  {
> >      libxl_ctx *ctx = libxl__gc_owner(gc);
> > -    memset(info, 0x00, sizeof(libxl_device_model_info));
> 
> Why do you remove this memset?
> 

Because we are reusing the dm_info passed in by callers further up the
stack, which has already been filled with useful contents.
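
To illustrate the difference with a toy, self-contained example (the
struct and functions below are stand-ins, not the real
libxl_device_model_info or libxl code):

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for libxl_device_model_info, reduced to three fields. */
    struct dm_info {
        int version;        /* chosen by the caller (config parsing) */
        const char *model;  /* override path, chosen by the caller   */
        int vnc;            /* (re)derived from the vfb, as in the patch */
    };

    /* Old behaviour: wipes everything the caller set up. */
    static void build_args_old(struct dm_info *info)
    {
        memset(info, 0x00, sizeof(*info));  /* version/model now gone */
        info->vnc = 0;
    }

    /* Patched behaviour: reset only the fields this function owns. */
    static void build_args_new(struct dm_info *info)
    {
        info->vnc = 0;      /* version and model survive untouched */
    }

    int main(void)
    {
        struct dm_info a = { .version = 2, .model = "qemu-upstream",
                             .vnc = 1 };
        struct dm_info b = a;

        build_args_old(&a); /* a.version == 0, a.model == NULL */
        build_args_new(&b); /* b.version == 2, b.model intact  */

        printf("old: version=%d, new: version=%d model=%s\n",
               a.version, b.version, b.model);
        return 0;
    }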

> > +    info->vnc = 0;
> >      if (vfb != NULL) {
> >          info->vnc = vfb->vnc;
> >          if (vfb->vnclisten)
> > @@ -927,9 +927,12 @@ static int libxl__build_xenpv_qemu_args(libxl__gc *gc,
> >          info->nographic = 1;
> >      info->domid = domid;
> >      info->dom_name = libxl_domid_to_name(ctx, domid);
> > -    info->device_model_version = LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
> > -    info->device_model = NULL;
> > -    info->type = LIBXL_DOMAIN_TYPE_PV;
> > +    info->target_ram = 0;
> > +    info->videoram = 0;
> > +    info->acpi = 0;
> > +    info->vcpus = 0;
> > +    info->vcpu_avail = 0;
> > +    info->xen_platform_pci = 0;
> >      return 0;
> >  }
> > diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> > index 5415bc5..74a77a7 100644
> > --- a/tools/libxl/xl_cmdimpl.c
> > +++ b/tools/libxl/xl_cmdimpl.c
> > @@ -626,6 +626,8 @@ static void parse_config_data(const char *configfile_filename_report,
> >      int pci_power_mgmt = 0;
> >      int pci_msitranslate = 1;
> >      int e;
> > +    XLU_ConfigList *dmargs;
> > +    int nr_dmargs = 0;
> >  
> >      libxl_domain_create_info *c_info = &d_config->c_info;
> >      libxl_domain_build_info *b_info = &d_config->b_info;
> > @@ -1075,14 +1077,10 @@ skip_vfb:
> >          break;
> >      }
> >  
> > -    if (c_info->hvm == 1) {
> > -        XLU_ConfigList *dmargs;
> > -        int nr_dmargs = 0;
> > -
> > -        /* init dm from c and b */
> > -        libxl_init_dm_info(dm_info, c_info, b_info);
> > +    /* init dm from c and b */
> > +    libxl_init_dm_info(dm_info, c_info, b_info);
> >  
> > -        /* then process config related to dm */
> > +    if (c_info->hvm == 1) {
> >          if (!xlu_cfg_get_string (config, "device_model", &buf)) {
> >              fprintf(stderr,
> >                      "WARNING: ignoring device_model directive.\n"
> > @@ -1147,6 +1145,42 @@ skip_vfb:
> >                  dm_info->extra[i] = a ? strdup(a) : NULL;
> >              }
> >          }
> > +    } else {
> > +        if (!xlu_cfg_get_string (config, "device_model", &buf)) {
> > +            fprintf(stderr,
> > +                    "WARNING: ignoring device_model directive.\n"
> > +                    "WARNING: Use \"device_model_override\" instead if you really want a non-default device_model\n");
> > +        }
> [...]
> > +        if (!xlu_cfg_get_list(config, "device_model_args", &dmargs, &nr_dmargs, 0))
> > +        {
> > +            int i;
> > +            dm_info->extra = xmalloc(sizeof(char *) * (nr_dmargs + 1));
> > +            dm_info->extra[nr_dmargs] = NULL;
> > +            for (i=0; i<nr_dmargs; i++) {
> > +                const char *a = xlu_cfg_get_listitem(dmargs, i);
> > +                dm_info->extra[i] = a ? strdup(a) : NULL;
> > +            }
> > +        }
> 
> There is no need to duplicate all this code, is there?
> 
> Just pull the relevant stuff from above out of the c_info->hvm == 1
> case.
> 

Ah, yes. The dmargs parsing can be pulled out.

But the WARNING messages differ between the two branches, which is why
I duplicated some code. I will make it cleaner and resend.
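
Something like the sketch below, say (parse_dm_extra_args is a
hypothetical name, not existing libxl API; it only reuses the
xlu_cfg_get_list/xlu_cfg_get_listitem calls already in the patch):

    /* Hypothetical helper, factored out of parse_config_data(); both
     * the HVM and the PV branch would call it instead of open-coding
     * the list parsing.  Needs the usual xl_cmdimpl.c headers. */
    static void parse_dm_extra_args(XLU_Config *config, const char *key,
                                    char ***extra_out)
    {
        XLU_ConfigList *dmargs;
        int nr_dmargs = 0;
        int i;

        if (xlu_cfg_get_list(config, key, &dmargs, &nr_dmargs, 0))
            return;  /* key not present in the config file */

        *extra_out = xmalloc(sizeof(char *) * (nr_dmargs + 1));
        (*extra_out)[nr_dmargs] = NULL;
        for (i = 0; i < nr_dmargs; i++) {
            const char *a = xlu_cfg_get_listitem(dmargs, i);
            (*extra_out)[i] = a ? strdup(a) : NULL;
        }
    }

    /* ... and in parse_config_data(), for both guest types: */
    parse_dm_extra_args(config, "device_model_args", &dm_info->extra);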

> One general concern I have is there are cases where we want 2 DM
> instances. In particular we can have an FV instance running in a stub
> domain which uses a PV instance running in dom0 for certain
> functionality (e.g. emulated VGA in the FV stub domain qemu goes to an
> xenfb frontend talking to a xenfb backend running in a PV qemu in domain
> 0).
> 

I haven't come across such a use case yet. Does libxl support
specifying two DMs for one domain? What's the syntax?

> I'm not sure what the best solution here is, we could obviously
> duplicate up all the options but that seems unpleasant.
> 

Agreed.

> I guess for the most part we should treat both qemus in this case as
> the same logical entity split into two brains, so most of the options
> are common to both (and e.g. the versions are matched etc) with libxl
> taking care of directing the individual options to the right instance
> (or both). Yeah, that sounds like the answer.
> 
> Only exception is the device_model_args option where I can see that
> passing different extra args to each instance would be useful and it is
> unlikely that libxl will understand them enough to choose where to send
> them; in fact we probably want 3 variables: device_model_args and
> device_model_args_{pv,fv}.
> 

Well, we already have device_model_args_{old,new}, which handle
qemu-xen and upstream qemu respectively. And since the main goal of my
patch is to enable using upstream qemu as a PV backend,
device_model_args_{pv,fv} may not be sufficient -- with this patch
there are two possible qemus for PV. The handling logic will be
complicated.
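
Just to make the naming question concrete (every key below except
"device_model_args" is hypothetical and does not exist in the code
quoted above; parse_dm_extra_args is the hypothetical helper from my
earlier sketch), the routing you describe would look roughly like:

    /* Common args would go to every device model instance; the
     * per-instance keys (hypothetical) only to the matching qemu. */
    parse_dm_extra_args(config, "device_model_args",    &common_extra);
    parse_dm_extra_args(config, "device_model_args_pv", &pv_extra);
    parse_dm_extra_args(config, "device_model_args_fv", &fv_extra);

and on top of that we would still need something like old/new to pick
which qemu binary serves the PV side.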

> Ian.
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel