Re: [Xen-devel] reboot driver domain, vifX.Y = NO-CARRIER?
On Fri, Apr 27, 2018 at 06:02:46PM +0100, Andrew Cooper wrote:
> On 27/04/18 17:14, Jason Cooper wrote:
> > On Fri, Apr 27, 2018 at 04:52:57PM +0100, Andrew Cooper wrote:
> >> On 27/04/18 16:35, Jason Cooper wrote:
> >>> On Fri, Apr 27, 2018 at 04:11:39PM +0100, Andrew Cooper wrote:
> >>>> On 27/04/18 16:03, Jason Cooper wrote:
> >>>>> The problem occurs when I reboot a driver domain.  Regardless of the
> >>>>> type of guest attached to it, I'm unable to re-establish connectivity
> >>>>> between the driver domain and the re-attached guest.  e.g. I reboot
> >>>>> GW/FW, then re-attach VM1, VM2 and the rest.  No matter how I do it, I
> >>>>> get:
> >>>>>
> >>>>> $ ip link
> >>>>> ...
> >>>>> 11: vif20.1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq
> >>>>>     master br10 qlen 32
> >>>>>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> >>>>>
> >>>>> in the driver domain.  At this point, absolutely no packets flow
> >>>>> between the two VMs, not even ARP.  The only solution, so far, is to
> >>>>> unnecessarily reboot the PV guests.  After that, networking is fine.
> >>>>>
> >>>>> Any thoughts?
> >>>> The underlying problem is that the frontend/backend setup in xenstore
> >>>> encodes the domid in the path, and changing that isn't transparent to
> >>>> the guest at all.
> >>> Oh joy.  It would seem to make more sense to use the domain name or the
> >>> uuid...
> >> domids are also used in the grant and event hypercall interfaces with Xen.
> >>
> >> There is no way this horse is being put back in its stable...
> > :-(
> >
> >>>> The best idea we came up with was to reboot the driver domain and reuse
> >>>> its old domid, at which point all the xenstore paths would remain
> >>>> valid.  There is support in Xen for explicitly choosing the domid of a
> >>>> domain, but I don't think that it is wired up sensibly in xl.
> >>> hmmm, yes.  It's not wired up at all afaict.  Mind giving me a hint on
> >>> how to reuse the domid?
> >> xc_domain_create() takes a domid value by pointer.  Passing a value
> >> other than zero will cause Xen to use that domid, rather than searching
> >> for the next free domid.
> >>
> >> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> >> index b5e27a7..7866092 100644
> >> --- a/tools/libxl/libxl_create.c
> >> +++ b/tools/libxl/libxl_create.c
> >> @@ -583,6 +583,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
> >>          goto out;
> >>      }
> >>
> >> +    *domid = atoi(getenv("OVERRIDE_DOMID") ?: "0");
> >>      ret = xc_domain_create(ctx->xch, info->ssidref, handle, flags, domid,
> >>                             &xc_config);
> >>      if (ret < 0) {
> >>
> >> This gross hack may get you somewhere (entirely untested).
> > Gah!  Yep, that's just what I needed, thanks!  I don't suppose a patch
> > series adding a 'domid' field to the domain config file would be
> > rejected outright?  That would allow callers of xl to use key=value for
> > reboot scripts like mine, and also allow for a static domid setup of the
> > driver domains if folks want that.
>
> That question would have to be deferred to the toolstack maintainers,
> but some ability to manage exact domids would be a very good thing.
>
> Having a domid= field would allow for very fine-grained control, but
> probably more control than most people want.  Alternatively, having some
> kind of "reuse_domid" field which booted the domain normally once,
> recorded its domid, and reused that on reboot might be rather more useful.
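The atoi() in the hack above will pass anything found in OVERRIDE_DOMID,
including garbage or a reserved value, straight to xc_domain_create().  Purely
as an untested sketch of how the override could be parsed more defensively:
DOMID_FIRST_RESERVED (0x7FF0) is hard-coded here from xen/include/public/xen.h
to keep the example self-contained, and parse_override_domid() is an invented
helper name, not an existing libxl function.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define DOMID_FIRST_RESERVED 0x7FF0U  /* value from xen/include/public/xen.h */

/* Return the requested domid, or 0 to let Xen pick the next free one. */
static uint32_t parse_override_domid(void)
{
    const char *s = getenv("OVERRIDE_DOMID");
    char *end;
    unsigned long val;

    if (!s || !*s)
        return 0;

    val = strtoul(s, &end, 0);
    if (*end != '\0' || val == 0 || val >= DOMID_FIRST_RESERVED) {
        fprintf(stderr, "Ignoring invalid OVERRIDE_DOMID '%s'\n", s);
        return 0;
    }

    return (uint32_t)val;
}

int main(void)
{
    printf("requested domid override: %u\n", (unsigned)parse_override_domid());
    return 0;
}

With the hack applied, the driver domain could then presumably be recreated
with the variable set in xl's environment, e.g.
OVERRIDE_DOMID=20 xl create <driver-domain.cfg> (config file name made up for
the example).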
>
To implement reuse_domid in a sane way, either the toolstack needs to manage
all domids and always set the domid when creating a domain, or the hypervisor
needs to cooperate -- i.e. provide an interface to reserve / pre-allocate
domids.

Either should be doable.  We should think a bit more about which approach is
better.

Wei.
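For the toolstack-managed option, the bookkeeping could be as simple as
recording the domid a named domain was given on its first boot and feeding
that value back into xc_domain_create() on reboot.  The following is an
illustrative sketch only: the state directory and the
save_reused_domid()/load_reused_domid() helpers are invented for the example
and are not part of libxl or xl.

#include <stdio.h>

#define REUSE_DIR "/var/lib/xen/reuse-domid"   /* hypothetical state dir */

/* Record the domid a domain was given, keyed by domain name. */
int save_reused_domid(const char *name, unsigned int domid)
{
    char path[512];
    FILE *f;

    snprintf(path, sizeof(path), REUSE_DIR "/%s", name);
    f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%u\n", domid);
    return fclose(f) ? -1 : 0;
}

/* Return the recorded domid, or 0 meaning "let Xen pick the next free one". */
unsigned int load_reused_domid(const char *name)
{
    char path[512];
    unsigned int domid = 0;
    FILE *f;

    snprintf(path, sizeof(path), REUSE_DIR "/%s", name);
    f = fopen(path, "r");
    if (!f)
        return 0;
    if (fscanf(f, "%u", &domid) != 1)
        domid = 0;
    fclose(f);
    return domid;
}

The hypervisor-side alternative would presumably need a new interface to
reserve a domid, so that nothing else can claim it between destroying the
driver domain and re-creating it.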