
Re: [Xen-devel] [PATCH v5 for-4.5 1/2] libxl: add support for 'channels'



On Tue, Sep 23, 2014 at 11:35:19AM +0100, Dave Scott wrote:
> 
> On 23 Sep 2014, at 11:16, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> 
> > On Mon, Sep 22, 2014 at 05:17:20PM +0100, Ian Jackson wrote:
> >> David Scott writes ("[PATCH v5 for-4.5 1/2] libxl: add support for 
> >> 'channels'"):
> >>> A 'channel':
> >>>  - is a low-bandwidth private communication channel that resembles
> >>>    a physical serial port.
> >>>  - is implemented as a PV console with a well-known string name
> >>>    which is used to hook the channel to the appropriate software
> >>>    in the guest (i.e. some kind of guest agent).
> >>>  - has a backend 'connection' which describes what should happen
> >>>    to the data.
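(For concreteness: with the syntax this series proposes, such a channel
would be requested in the xl domain config along these lines; the name
and path below are invented for illustration:

    channel = [ "connection=socket,path=/tmp/guest-agent,name=org.example.agent" ]
)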
> >> 
> >> This looks good in principle.
> >> 
> >> I have a couple of easy quibbles:
> >> 
> >> You have forgotten to document that you transport the channel name in
> >> xenstore in the subkey `name'.  This should be in your `console.txt'
> >> patch I think, given that xenstore-paths.markdown refers to that for
> >> all the nodes in .../console.
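(Concretely, I'd expect something like the following under the console
frontend path, alongside the other nodes documented in console.txt;
the value here is invented:

    /local/domain/$DOMID/device/console/$DEVID/name = "org.example.agent"
)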
> >> 
> >> And the interaction between consoles in the console part of the domain
> >> configuration and the channels listed in the channels part, should be
> >> documented.  (Even if it's just that consoles always have no name and
> >> channels always have one.)
> >> 
> >> 
> >> I have a harder one:
> >> 
> >>> +int libxl__init_console_from_channel(libxl__gc *gc,
> >>> +                                     libxl__device_console *console,
> >>> +                                     int dev_num,
> >>> +                                     libxl_device_channel *channel)
> >> 
> >> AFAICT this function is used to recreate the domain's channel
> >> configuration info for the benefit of libxl's caller (making
> >> enquiries) and setting up the qemu arguments.
> >> 
> >> But nowadays, following Wei's save/restore/JSON patches, I think we
> >> would expect libxl to retrieve this information from the JSON
> >> configuration.  That is, the console information should be in the
> >> stored JSON config and can be retrieved from there.
> >> 
> > 
> > The design Dave proposed has dedicated libxl_device_channel structure,
> > and that's the thing being saved in JSON.
> > 
> > AIUI this channel device happens to be backed by a QEMU console device,
> > which doesn't preclude it from being backed by a dedicated backend or
> > other devices. In this case I think generating a console device
> > internally in libxl is actually the right thing to do.
> > 
> > Dave, am I right about the design?
> 
> That's right. I've added a libxl_device_channel to the domain config,
> which should be saved in the JSON (is that automatic? I've not
> looked at the code for it yet.) The libxl__device_console remains an

Yes, it will be saved in JSON provided xl feeds it to libxl in the first
place.
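
In the stored JSON I would then expect a fragment along these lines
(field names follow the IDL; the values are made up and the exact
shape of the connection union may differ):

    "channels": [
        {
            "devid": 1,
            "name": "org.example.agent",
            "connection": "socket",
            "path": "/tmp/guest-agent"
        }
    ]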

But if you plan to add in channel hotplug / remove functionality, you
need to look at how other devices do that.
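By analogy with existing devices that would mean exposing something
like this (hypothetical prototype, mirroring libxl_device_nic_add):

    int libxl_device_channel_add(libxl_ctx *ctx, uint32_t domid,
                                 libxl_device_channel *channel,
                                 const libxl_asyncop_how *ao_how);

plus a matching remove/destroy pair, wired up the same way.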

> internal type. The existing domain create codepath synthesises an
> instance of libxl__device_console to represent the primary console. My
> patches extend this to create secondary consoles to implement the
> channels.
> 
> So if a domain is saved then the libxl_device_channels should end up
> in the JSON. On restore, these libxl_device_channels should get
> reloaded from the JSON and internally mapped onto
> libxl__device_consoles -- does that sound right?
> 

That sounds right to me.
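
Roughly, the restore path would then do something like this (a sketch
only; the devid offset and error handling are my guesses):

    int i, ret;

    /* d_config->channels has just been reloaded from the saved JSON */
    for (i = 0; i < d_config->num_channels; i++) {
        libxl__device_console console;

        /* devid 0 is the primary console, so channels start at 1 */
        ret = libxl__init_console_from_channel(gc, &console, i + 1,
                                               &d_config->channels[i]);
        if (ret) goto out;

        /* then feed the synthesised console into the normal console
         * setup path, exactly as on fresh create */
    }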

Wei.

> Thanks,
> Dave
> > 
> > Wei.
> > 
> >> (There are also unfortunate security implications to reading the
> >> backend directory like that - if we have a driver domain, the qemu
> >> might get untrustworthy paths.)
> >> 
> >> Wei, am I right ?
> >> 
> >> 
> >> Thanks,
> >> Ian.
