
Re: [Xen-devel] [PATCH v2] libxl: Introduce a template for devices with a controller



On Tue, Dec 01, 2015 at 05:03:28PM +0000, George Dunlap wrote:
> On Tue, Dec 1, 2015 at 3:58 PM, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> > On Tue, Dec 01, 2015 at 12:09:58PM +0000, George Dunlap wrote:
> > [...]
> >> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> >> index 6b73848..44e2951 100644
> >> --- a/tools/libxl/libxl.h
> >> +++ b/tools/libxl/libxl.h
> >> @@ -1396,6 +1396,71 @@ void libxl_vtpminfo_list_free(libxl_vtpminfo *, int nr_vtpms);
> >>   *
> >>   *   This function does not interact with the guest and therefore
> >>   *   cannot block on the guest.
> >> + *
> >> + * Controllers
> >> + * -----------
> >> + *
> >> + * Most devices are treated individually.  Some classes of device,
> >> + * however, such as USB or SCSI, inherently require a hierarchy
> >> + * of levels, with lower-level devices "attached"
> >> + * to higher-level ones.  USB for instance has "controllers" at the
> >> + * top, which have buses, on which are devices, which consist of
> >> + * multiple interfaces.  SCSI has "hosts" at the top, then buses,
> >> + * targets, and LUNs.
> >> + *
> >> + * In that case, for each <class>, there will be a set of functions
> >> + * and types for each <level>.  For example, for <class>=usb, there
> >> + * may be <levels> ctrl (controller) and dev (device), with ctrl being
> >> + * level 0.
> >> + *
> >> + * libxl_device_<class><level0>_<function> will act more or
> >
> > Missed "level0" comment from Chunyan?
> 
> The only comment of Chunyan's I could find that has <level0> in it is
> actually correcting <type><level0> => <class><level0>.  Did I
> misunderstand, or did you? :-)

Oops. I misread. Sorry about the noise.

Wei.

> 
>  -George
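
To make the naming convention in the quoted comment concrete, here is a
minimal sketch for <class>=usb with <level>s ctrl (level 0) and dev.
The struct fields and exact signatures below are illustrative
assumptions, not the API introduced by the patch:

    /* Sketch only: field names and signatures here are assumptions,
     * not the actual libxl API. */
    #include <stdint.h>

    typedef struct libxl__ctx libxl_ctx;  /* opaque handle, as in libxl */

    /* Level 0 of <class>=usb: the controller. */
    typedef struct {
        uint32_t devid;   /* controller id */
        int ports;        /* hypothetical: number of ports */
    } libxl_device_usbctrl;

    /* Level 1: a device attached to a controller. */
    typedef struct {
        uint32_t ctrl;    /* devid of the parent controller */
        int port;         /* hypothetical: port on that controller */
    } libxl_device_usbdev;

    /* libxl_device_<class><level0>_<function>: level-0 functions
     * behave like their non-hierarchical counterparts. */
    int libxl_device_usbctrl_add(libxl_ctx *ctx, uint32_t domid,
                                 libxl_device_usbctrl *usbctrl);

    /* Lower-level functions must name the parent they attach to. */
    int libxl_device_usbdev_add(libxl_ctx *ctx, uint32_t domid,
                                libxl_device_usbdev *usbdev);

The point of the convention is that each <level> gets its own type and
set of functions, so a <class>=scsi hierarchy (host, bus, target, LUN)
would follow the same pattern with more levels.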
