Re: [Xen-devel] [PATCH] [qemu] xen_be_init under stubdom
On Wed, 19 Jan 2011, Kamala Narasimhan wrote:
> Do nothing in xen_be_init under stubdom plus a minor inconsequential cleanup.
>
> Signed-off-by: Kamala Narasimhan <kamala.narasimhan@xxxxxxxxxx>
>
> Kamala
>
> diff --git a/hw/xen_backend.c b/hw/xen_backend.c
> index d9be513..61e1210 100644
> --- a/hw/xen_backend.c
> +++ b/hw/xen_backend.c
> @@ -613,7 +613,7 @@ static void xenstore_update(void *unused)
>
>      vec = xs_read_watch(xenstore, &count);
>      if (vec == NULL)
> -        goto cleanup;
> +        return;
>
>      if (sscanf(vec[XS_WATCH_TOKEN], "be:%" PRIxPTR ":%d:%" PRIxPTR,
>                 &type, &dom, &ops) == 3)
> @@ -621,7 +621,6 @@
>      if (sscanf(vec[XS_WATCH_TOKEN], "fe:%" PRIxPTR, &ptr) == 1)
>          xenstore_update_fe(vec[XS_WATCH_PATH], (void*)ptr);
>
> -cleanup:
>      qemu_free(vec);
>  }
>
> @@ -646,6 +645,10 @@ static void xen_be_evtchn_event(void *opaque)
>
>  int xen_be_init(void)
>  {
> +#ifdef CONFIG_STUBDOM
> +    return 0;
> +#endif
> +
>      xenstore = xs_daemon_open();
>      if (!xenstore) {
>          xen_be_printf(NULL, 0, "can't connect to xenstored\n");

I think it would be better if we actually return an error from
xen_be_init and just print a warning in hw/xen_machine_fv.c when the
backends fail to initialize, instead of exiting.  It is OK to just call
exit in hw/xen_machine_pv.c, because nothing is running in that qemu
apart from the backends.

Also, having another #ifdef CONFIG_STUBDOM might be OK in qemu-xen, but
in qemu upstream we can get away without it by implementing a function
like:

int xen_qemu_is_a_stubdom();

that returns 1 if qemu is running in a stubdom and 0 otherwise.  I would
make domid_s a global int variable (see vl.c:5827) so that
xen_qemu_is_a_stubdom can be implemented simply as:

return domid_s;
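Putting the two suggestions together, here is a rough sketch of what I
mean (the return values, messages, and call sites are only illustrative,
the fragments rely on each file's existing includes, and it assumes
domid_s ends up as a global that is non-zero only in the stubdom case):

/* vl.c: assumes domid_s is made a global int, set at startup and
 * non-zero only when this qemu runs inside a stub domain */
int domid_s = 0;

/* hw/xen_backend.c */
extern int domid_s;

int xen_qemu_is_a_stubdom(void)
{
    /* non-zero in a stubdom, 0 in a normal qemu */
    return domid_s;
}

int xen_be_init(void)
{
    if (xen_qemu_is_a_stubdom())
        return -1;    /* report failure, let the machine init decide */

    xenstore = xs_daemon_open();
    if (!xenstore) {
        xen_be_printf(NULL, 0, "can't connect to xenstored\n");
        return -1;
    }
    /* ... rest unchanged ... */
}

/* hw/xen_machine_fv.c: warn and keep going, an HVM guest can boot
 * without the PV backends */
if (xen_be_init() != 0) {
    fprintf(stderr, "xen backend core setup failed, "
            "continuing without PV backends\n");
}

/* hw/xen_machine_pv.c: exiting is still fine here, nothing but the
 * backends runs in this qemu */
if (xen_be_init() != 0) {
    fprintf(stderr, "xen backend core setup failed\n");
    exit(1);
}

That would keep the stubdom special case out of xen_backend.c itself,
and the same code could be shared by qemu-xen and upstream.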