
Re: [Xen-devel] [DOC v6] PV Calls protocol design



On Wed, 28 Sep 2016, Konrad Rzeszutek Wilk wrote:
> On Wed, Sep 14, 2016 at 04:45:01PM -0700, Stefano Stabellini wrote:
> > Hi all,
> 
> Hey!
> 
> I took a look at the spec. It overall looks good, with each of the
> sub-commands described in exhaustive detail (which is awesome!).

Thank you! I really appreciate the review, and sorry for taking so long
to reply (last week I was in a conference).


> Please see below for some questions, comments.
> > 
> > This is the design document of the PV Calls protocol. You can find
> > prototypes of the Linux frontend and backend drivers here:
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git pvcalls-6
> > 
> > To use them, make sure to enable CONFIG_XEN_PVCALLS in your kernel
> > config and add "pvcalls=1" to the command line of your DomU Linux
> > kernel. You also need the toolstack to create the initial xenstore nodes
> > for the protocol. To do that, please apply the attached patch to libxl
> > (the patch is based on Xen 4.7.0-rc3) and add "pvcalls=1" to your DomU
> > config file.
> > 
> > 
> > In this version I added a "reuse" field to the release command, to give
> > a hint to the backend that data ring pages and evtchn will be reused.
> > The backend is free to unmap them immediately as usual or keep them
> > mapped until they get reused (or until it decides it is time to free
> > them). Thanks to this optimization pv calls can handle more requests per
> > second than Docker with apachebench in my test environment.
> > 
> > I also added a "networking-calls" backend node on xenstore to allow the
> > backend to specify that it supports networking calls. In the future, when
> > other POSIX syscall sets are supported, a backend could decide to
> > implement only one set.
> > 
> > 
> > Please review!
> > 
> > Cheers,
> > 
> > Stefano
> > 
> > 
> > Changes in v6:
> > - add reuse field in release command
> > - add "networking-calls" backend node on xenstore
> > - fixed tab/whitespace indentation
> > 
> > Changes in v5:
> > - clarify text
> > - rename id to req_id
> > - rename sockid to id
> > - move id to request and response specific fields
> > - add version node to xenstore
> > 
> > Changes in v4:
> > - rename xensock to pvcalls
> > 
> > Changes in v3:
> > - add a dummy element to struct xen_xensock_request to make sure the
> >   size of the struct is the same on both x86_32 and x86_64
> > 
> > Changes in v2:
> > - add max-dataring-page-order
> > - add "Publish backend features and transport parameters" to backend
> >   xenbus workflow
> > - update new cmd values
> > - update xen_xensock_request
> > - add backlog parameter to listen and binary layout
> > - add description of new data ring format (interface+data)
> > - modify connect and accept to reflect new data ring format
> > - add link to POSIX docs
> > - add error numbers
> > - add address format section and relevant numeric definitions
> > - add explicit mention of unimplemented commands
> > - add protocol node name
> > - add xenbus shutdown diagram
> > - add socket operation
> > 
> > ---
> > 
> > # PV Calls Protocol version 1
> > 
> > ## Rationale
> > 
> > PV Calls is a paravirtualized protocol that allows the implementation of
> > a set of POSIX functions in a different domain. The PV Calls frontend
> 
> For folks not used to Xen, the word 'domain' may be confusing. They
> may think PCI domain (segments) or such.
> 
> Perhaps it may be good to replace the word domain with container?
> Or guest?
>
> Or have an section explaining what some of these words mean.
> > sends POSIX function calls to the backend, which implements them and
> 
> Like backend for example. It is crystal clear to me what this is, but
> for somebody who has been brewing in networking the word 'backend'
> may imply a web-server to them.
>
> > returns a value to the frontend.
> 
> And this they may associate with a client. Or a client application (e.g curl).

You are right about all of these points. I think it might be best to add
a glossary to the doc. I'll do that.


> > This version of the document covers networking function calls, such as
> > connect, accept, bind, release, listen, poll, recvmsg and sendmsg; but
> 
> NIT: Sort these alphabetically. It will be easier to expand this in the
> future if you have it in that order.

I ordered them by call order, which I think will make them easier to
understand. More on this later.


> > the protocol is meant to be easily extended to cover different sets of
> > calls. Unimplemented commands return ENOTSUPP.
> 
> Surely EOPNOTSUPP ?

Both exist, but I think that ENOTSUPP is more appropriate. More on this
later.


> > PV Calls provide the following benefits:
> > * full visibility of the guest behavior on the backend domain, allowing
> >   for inexpensive filtering and manipulation of any guest calls
> > * excellent performance
> 
> NIT: (feel free to ignore it) Missing periods or commas at the end of the 
> lines.

I did it on purpose for readability.


> > Specifically, PV Calls for networking offer these advantages:
> > * guest networking works out of the box with VPNs, wireless networks and
> >   any other complex configurations on the host
> > * guest services listen on ports bound directly to the backend domain IP
> >   addresses
> > * localhost becomes a secure namespace for inter-VMs communications
> > 
> > 
> > ## Design
> >
> 
> Would it make sense to mention the pre-requisite of why Xen is used?
> Your slides at the XPDS were quite good and perhaps some of the content
> here (VMCS, PV guests, etc) could be put here.
> 
> Naturally this is more for folks who haven't been marinating in Xen for years
> to get an idea why it was chosen.

That's a good suggestion. I can mention that Xen PV guests run on bare
metal as well as in public clouds, and that Xen PV guests are about
isolation without mimicking an existing physical machine, which is a
good fit for this project. If there is another key point you think I
should write down, please let me know.


> > ### Xenstore
> > 
> > The frontend and the backend connect to each other exchanging information 
> > via
> 
> NIT: s/'to each other exchanging information'/'via xenstore to exchange 
> information'.
> 
> Because they really don't connect to each other - rather XenStore is the
> medium through which they 'talk'.

You are right.


> > xenstore. The toolstack creates front and back nodes with state
> > XenbusStateInitialising. The protocol node name is **pvcalls**. There can 
> > only
> 
> Perhaps also mention the file or URL where the XenbusStateInitialising states
> are defined? Or even how to use Xenstore?
> 
> Oh, like http://xenbits.xen.org/docs/unstable/misc/xenstore.txt
> And 
> http://xenbits.xen.org/docs/unstable/hypercall/x86_64/include,public,io,xs_wire.h.html
> and 
> http://xenbits.xen.org/docs/unstable/hypercall/x86_64/include,public,io,xenbus.h.html
>  ?

Thanks, I'll add the pointers.


> > be one PV Calls frontend per domain.
> > 
> > #### Frontend XenBus Nodes
> > 
> > port
> 
> Not event-channel ?

I took "inspiration" from the console ring, which uses "port".


> >      Values:         <uint32_t>
> > 
> >      The identifier of the Xen event channel used to signal activity
> >      in the ring buffer.
> > 
> > ring-ref
> >      Values:         <uint32_t>
> > 
> >      The Xen grant reference granting permission for the backend to map
> >      the sole page in a single page sized ring buffer.
> 
> [edit: Is this the command ring or data ring?]

This is the command ring; all other rings are set up by commands sent
over this one command ring. The commands that spawn data rings and the
data ring format will be described later.


> > #### Backend XenBus Nodes
> > 
> > version
> >      Values:         <uint32_t>
> > 
> >      Protocol version supported by the backend.
> 
> You may want to expand on that. Like:
> 
> Currently the value is 1. Or 101.

OK, makes sense. I'll just go with 1.


> > max-dataring-page-order
> 
> dataring or data-ring
> 
> But why the 'dataring' in first place? Why not 'max-page-order' ?

Fair enough, I'll generalize it to max-page-order.


> >      Values:         <uint32_t>
> > 
> >      The maximum supported size of the data ring in units of lb(machine
> >      pages). (e.g. 0 == 1 page,  1 = 2 pages, 2 == 4 pages, etc.).
> > 
> > networking-calls
> >      Values:         <uint32_t>
> > 
> >      Value "1" means that the backend support networking calls.
> 
> s/support/supports/
> 
> What does 'networking calls' mean? Is that the 'networking function calls'
> you mentioned? Perhaps it may be better to say that
> 
> 1: supports the accept, bind, listen, and so on.
> 
> And then when you expand on that and have more, you can use
> the value 2 which will include the ones in 1 and extra.
> 
> But now what if you want to stuff other things besides networking
> function calls? What if you want to say stuff 'fork' or something
> else.
> 
> Would this be better be called 'function-calls' ?

Maybe I should call the XenStore node "function-calls" and have the
value "1" stand for accept, bind, listen, and so on (the current list).


> >      Value "0" means that it does not.
> 
> 
> I presume that the value of 0 means you can't really do much
> with this ring? 

Yeah, basically nothing today.


> > #### State Machine
> > 
> > Initialization:
> > 
> >     *Front*                               *Back*
> >     XenbusStateInitialising               XenbusStateInitialising
> >     - Query virtual device                - Query backend device
> >       properties.                           identification data.
> >     - Setup OS device instance.           - Publish backend features
> >     - Allocate and initialize the           and transport parameters
> >       request ring.                                      |
> >     - Publish transport parameters                       |
> >       that will be in effect during                      V
> >       this connection.                            XenbusStateInitWait
> >                  |
> >                  |
> >                  V
> >        XenbusStateInitialised
> > 
> >                                           - Query frontend transport 
> > parameters.
> >                                           - Connect to the request ring and
> >                                             event channel.
> >                                                          |
> >                                                          |
> >                                                          V
> >                                                  XenbusStateConnected
> > 
> >      - Query backend device properties.
> >      - Finalize OS virtual device
> >        instance.
> >                  |
> >                  |
> >                  V
> >         XenbusStateConnected
> > 
> > Once frontend and backend are connected, they have a shared page, which
> > is used to exchange messages over a ring, and an event channel,
> > which is used to send notifications.
> 
> You probably want the Closed/Closing when a guest is migrated?

They are already described a few lines below under Shutdown.


> Or what if the user / container does:
> 
> ifdown eth0; ifup eth0?
> 
> Or ifconfig eth0 <new ip>
> 
> Should that trigger then XenBusReconfiguring ?

All sockets go down with errors on ifdown eth0. When eth0 comes up
again, new socket connections will succeed again. I don't think we need
a special XenStore command for this.


> > 
> > Shutdown:
> > 
> >     *Front*                            *Back*
> >     XenbusStateConnected               XenbusStateConnected
> >                 |
> >                 |
> >                 V
> >        XenbusStateClosing
> > 
> >                                        - Unmap grants
> >                                        - Unbind evtchns
> >                                                  |
> >                                                  |
> >                                                  V
> >                                          XenbusStateClosing
> > 
> >     - Unbind evtchns
> >     - Free rings
> >     - Free data structures
> >                |
> >                |
> >                V
> >        XenbusStateClosed
> > 
> >                                        - Free remaining data structures
> >                                                  |
> >                                                  |
> >                                                  V
> >                                          XenbusStateClosed
> > 
> > 
> > ### Commands Ring
> > 
> > The shared ring is used by the frontend to forward POSIX function calls to 
> > the
> > backend. I'll refer to this ring as **commands ring** to distinguish it from
> 
> s/I'll/We shall/

OK


> > other rings which can be created later in the lifecycle of the protocol 
> > (data
> > rings). The ring format is defined using the familiar `DEFINE_RING_TYPES` 
> > macro
> 
> Data rings? What are those? If this is the command ring and there is
> a data ring - shouldn't you also have a XenBus entry for the 'normal' ring?

They'll be described later; I can add a link to them here. I think
introducing data rings before explaining the command ring would make
the document harder to understand.


> Shouldn't you have then a command-ring in the XenBus store?
> 
> I am going to assume that the 'ring-ref' is for the normal (command) ring.

Right


> > (`xen/include/public/io/ring.h`). Frontend requests are allocated on the 
> > ring
> > using the `RING_GET_REQUEST` macro.
> > 
> > The format is defined as follows:
> >     
> >     #define PVCALLS_SOCKET         0
> >     #define PVCALLS_CONNECT        1
> >     #define PVCALLS_RELEASE        2
> >     #define PVCALLS_BIND           3
> >     #define PVCALLS_LISTEN         4
> >     #define PVCALLS_ACCEPT         5
> >     #define PVCALLS_POLL           6
> 
> I would recommend sorting them by name, but perhaps it makes more sense
> to sort them in this order?

They are in calling order: it is easier to read the details of "socket"
before "accept", rather than the other way around.


> At the start of the document you also mentioned rcvmsg and sendmsg but
> they are not here. Should they be removed at the start of the design doc?

They are implemented as data transfers over the data ring (which hasn't
been described yet).


> Also, you may want to mention:
> 
> "**NOTE**: The `id` is a unique value, similar to a 'cookie'. See section 
> <b>blah</b>
> for details."
> 
> As it was not clear what the 'req_id' vs 'id' is.

'req_id' is a cookie, only valid for one request/response pair. 'id' is
the socket identifier and needs to be consistent across all calls
affecting the same socket. I'll add more info.
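
To illustrate the distinction with a small sketch (the names here
mirror the struct quoted below, but the helper is purely illustrative):
a bind followed by a listen on the same socket carries the same socket
identifier but two different request cookies.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for the two common request fields; the full
 * struct xen_pvcalls_request is quoted further down in the doc. */
struct req_hdr {
    uint32_t req_id;  /* cookie, valid for one request/response pair */
    uint64_t sock_id; /* socket identifier, stable across calls */
};

/* req_id is fresh for every request; sock_id stays the same for
 * every call affecting the same socket. */
static void fill_hdr(struct req_hdr *h, uint32_t req_id, uint64_t sock_id)
{
    h->req_id = req_id;
    h->sock_id = sock_id;
}
```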


> > 
> >     struct xen_pvcalls_request {
> >             uint32_t req_id; /* private to guest, echoed in response */
> >             uint32_t cmd;    /* command to execute */
> >             union {
> >                     struct xen_pvcalls_socket {
> >                             uint64_t id;
> >                             uint32_t domain;
> >                             uint32_t type;
> >                             uint32_t protocol;
> >                     } socket;
> 
> For those that are not familiar with connect or such - would it
> make sense to point where you got say '28' from?

That's interesting because for most things one can just read The Open
Group documents on the corresponding POSIX calls, but The Open Group
specs don't cover any address formats, so I had to specify them myself
under "Socket families and address format" later in the document.

In other words, anything but the '28' can be found on
http://pubs.opengroup.org. The 28 in particular can only be found under
"Socket families and address format" and in `man bind` and `man ipv6`
(and of course in the glibc headers under /usr/include).
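
On Linux with the glibc headers, the 28 can be checked directly at
compile time: addr[28] is sized for the largest address format this
version of the protocol covers.

```c
#include <assert.h>
#include <netinet/in.h>

/* struct sockaddr_in6 is family (2) + port (2) + flowinfo (4) +
 * IPv6 address (16) + scope id (4) = 28 bytes on Linux/glibc. */
_Static_assert(sizeof(struct sockaddr_in6) == 28,
               "addr[28] must fit a sockaddr_in6");
```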


> >                     struct xen_pvcalls_connect {
> >                             uint64_t id;
> >                             uint8_t addr[28];
> >                             uint32_t len;
> >                             uint32_t flags;
> >                             grant_ref_t ref;
> 
> Hm, what is this for? Oooh, _THIS_ is what the dataring you spoke
> about is for! Perhaps? 

Yes indeed :-)
To be precise, ref is the grant reference of the page that contains
indices and an array of grant references for the data ring.
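
A sketch of what that shared page might hold (field names are mine,
purely illustrative; the real layout is given by `struct
pvcalls_data_intf` in the protocol headers):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t grant_ref_t; /* as in the Xen public headers */

#define PAGE_SIZE 4096

/* Hypothetical layout of the page shared via **ref**: producer and
 * consumer indices for both directions, followed by the grant
 * references of the pages backing the actual data ring. */
struct data_intf_sketch {
    uint32_t in_cons,  in_prod;   /* backend-to-frontend indices */
    uint32_t out_cons, out_prod;  /* frontend-to-backend indices */
    uint32_t ring_order;          /* data ring spans 2^order pages */
    grant_ref_t ref[];            /* one grant per data ring page */
};

/* The indices plus the grant array of even a large (2^8 pages) data
 * ring fit comfortably within the single shared page. */
_Static_assert(sizeof(struct data_intf_sketch) + 256 * sizeof(grant_ref_t)
               <= PAGE_SIZE, "interface page overflow");
```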


> >                             uint32_t evtchn;
> >                     } connect;
> >                     struct xen_pvcalls_release {
> >                             uint64_t id;
> >                             uint8_t reuse;
> >                     } release;
> >                     struct xen_pvcalls_bind {
> >                             uint64_t id;
> >                             uint8_t addr[28];
> >                             uint32_t len;
> >                     } bind;
> >                     struct xen_pvcalls_listen {
> >                             uint64_t id;
> >                             uint32_t backlog;
> >                     } listen;
> >                     struct xen_pvcalls_accept {
> >                             uint64_t id;
> >                             uint64_t id_new;
> >                             grant_ref_t ref;
> >                             uint32_t evtchn;
> >                     } accept;
> >                     struct xen_pvcalls_poll {
> >                             uint64_t id;
> >                     } poll;
> >                     /* dummy member to force sizeof(struct 
> > xen_pvcalls_request) to match across archs */
> >                     struct xen_pvcalls_dummy {
> >                             uint8_t dummy[56];
> 
> What if some of the function calls you want to stuff in here are bigger than
> 56?
> 
> Say 128? Would this a different structure? And if so, would  
> `networking-calls`
> in the XenBus be say 2?

Right. Commands that require more than 56 bytes would need a new
'function-calls' (following your suggestion of renaming it) number or a
new 'version' number.
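
The 56-byte cap can be enforced at build time. An abridged copy of the
request struct (only the largest union member, connect, and the dummy
are kept here) with a C11 static assert:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t grant_ref_t;

/* Abridged from the struct quoted above, to show the size check. */
struct xen_pvcalls_request {
    uint32_t req_id;
    uint32_t cmd;
    union {
        struct {
            uint64_t id;
            uint8_t addr[28];
            uint32_t len;
            uint32_t flags;
            grant_ref_t ref;
            uint32_t evtchn;
        } connect;
        struct {
            uint8_t dummy[56];
        } dummy;
    } u;
};

/* A command payload larger than 56 bytes would grow the struct and
 * break the cross-arch size guarantee: catch it when compiling. */
_Static_assert(sizeof(struct xen_pvcalls_request) == 64,
               "request must be 64 bytes on every arch");
```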


> >                     } dummy;
> >             } u;
> >     };
> > 
> > The first two fields are common for every command. Their binary layout
> > is:
> > 
> >     0       4       8
> >     +-------+-------+
> >     |req_id |  cmd  |
> >     +-------+-------+
> > 
> > - **req_id** is generated by the frontend and identifies one specific 
> > request
> > - **cmd** is the command requested by the frontend:
> > 
> >     - `PVCALLS_SOCKET`:  0
> >     - `PVCALLS_CONNECT`: 1
> >     - `PVCALLS_RELEASE`: 2
> >     - `PVCALLS_BIND`:    3
> >     - `PVCALLS_LISTEN`:  4
> >     - `PVCALLS_ACCEPT`:  5
> >     - `PVCALLS_POLL`:    6
> > 
> > Both fields are echoed back by the backend.
> > 
> > As for the other Xen ring based protocols, after writing a request to the 
> > ring,
> 
> s/As for the other/As the other/
> s/protocols/protocols behave/

OK


> > the frontend calls `RING_PUSH_REQUESTS_AND_CHECK_NOTIFY` and issues an event
> > channel notification when a notification is required.
> > 
> > Backend responses are allocated on the ring using the `RING_GET_RESPONSE` 
> > macro.
> > The format is the following:
> > 
> >     struct xen_pvcalls_response {
> >         uint32_t req_id;
> >         uint32_t cmd;
> >         int32_t ret;
> >         uint32_t pad;
> >         union {
> >                     struct _xen_pvcalls_socket {
> >                             uint64_t id;
> >                     } socket;
> >                     struct _xen_pvcalls_connect {
> >                             uint64_t id;
> >                     } connect;
> >                     struct _xen_pvcalls_release {
> >                             uint64_t id;
> >                     } release;
> >                     struct _xen_pvcalls_bind {
> >                             uint64_t id;
> >                     } bind;
> >                     struct _xen_pvcalls_listen {
> >                             uint64_t id;
> >                     } listen;
> >                     struct _xen_pvcalls_accept {
> >                             uint64_t id;
> >                     } accept;
> >                     struct _xen_pvcalls_poll {
> >                             uint64_t id;
> >                     } poll;
> >                     struct _xen_pvcalls_dummy {
> >                             uint8_t dummy[8];
> >                     } dummy;
> >             } u;
> >     };
> > 
> > The first four fields are common for every response. Their binary layout
> > is:
> > 
> >     0       4       8       12      16
> >     +-------+-------+-------+-------+
> >     |req_id |  cmd  |  ret  |  pad  |
> >     +-------+-------+-------+-------+
> > 
> > - **req_id**: echoed back from request
> > - **cmd**: echoed back from request
> > - **ret**: return value, identifies success (0) or failure (see error 
> > numbers
> >   below). If the **cmd** is not supported by the backend, ret is ENOTSUPP.
> 
> Surely EOPNOTSUPP ?
> 
> > - **pad**: padding
> > 
> > After calling `RING_PUSH_RESPONSES_AND_CHECK_NOTIFY`, the backend checks 
> > whether
> > it needs to notify the frontend and does so via event channel.
> > 
> > A description of each command, their additional request and response
> > fields follow.
> > 
> > 
> > #### Socket
> > 
> > The **socket** operation corresponds to the POSIX [socket][socket]
> > function. It creates a new socket of the specified family, type and
> > protocol. **id** is freely chosen by the frontend and references this
> > specific socket from this point forward. See "Socket families and
> > address format" below.
> > 
> > Request fields:
> > 
> > - **cmd** value: 0
> > - additional fields:
> >   - **id**: generated by the frontend, it identifies the new socket
> >   - **domain**: the communication domain
> >   - **type**: the socket type
> >   - **protocol**: the particular protocol to be used with the socket, 
> > usually 0
> > 
> > Request binary layout:
> > 
> >     8       12      16      20     24       28
> >     +-------+-------+-------+-------+-------+
> >     |       id      |domain | type  |protoco|
> >     +-------+-------+-------+-------+-------+
> > 
> > Response additional fields:
> > 
> > - **id**: echoed back from request
> > 
> > Response binary layout:
> > 
> >     16       20       24
> >     +-------+--------+
> >     |       id       |
> >     +-------+--------+
> > 
> > Return value:
> > 
> >   - 0 on success
> >   - See the [POSIX socket function][connect] for error names; the 
> > corresponding
> >     error numbers are specified later in this document.
> > 
> > #### Connect
> > 
> > The **connect** operation corresponds to the POSIX [connect][connect]
> > function. It connects a previously created socket (identified by **id**)
> > to the specified address.
> > 
> > The connect operation creates a new shared ring, which we'll call **data
> > ring**. The data ring is used to send and receive data from the socket.
> > The connect operation passes two additional parameters which are
> > utilized to setup the new ring: **evtchn** and **ref**. **evtchn** is the
> > port number of a new event channel which will be used for notifications
> > of activity on the data ring. **ref** is the grant reference of a page
> > which contains shared pointers to write and read data from the data ring
> 
> The 'to write and read data from the data ring' sounds wrong.

I'll improve the wording


> > and the full array of grant references for the ring buffers. It will be
> 
> I am actually not sure what you are saying. Is this 'data ring' which
> has 'shared pointers' pointing to an 'full array of grant references'?
>
> In other words - a two-level tree? The 'data ring' page is filled with
> up to 512 grant references? And each of those grant references is
> used ?

I admit that the naming scheme is confusing. The page shared via the
"connect" command contains indices and grant refs for the actual data
ring.


> I will read on, it probably says it below...
> 
> > described in more detailed later. The data ring is unmapped and freed upon
> 
> s/detailed/detail/
> 
> > issuing a **release** command on the active socket identified by **id**.
> > 
> > When the frontend issues a **connect** command, the backend:
> > - finds its own internal socket corresponding to **id**
> > - connects the socket to **addr**
> > - maps the grant reference **ref**, the shared page contains the data
> >   ring interface (`struct pvcalls_data_intf`)
> > - maps all the grant references listed in `struct pvcalls_data_intf` and
> >   uses them as shared memory for the ring buffers
> > - bind the **evtchn**
> 
> bind the **evtch** to the internal socket?

No, I am referring to the regular EVTCHNOP_bind_interdomain operation.
**evtchn** is a regular event channel and it needs to be bound before it
can be used to send event notifications.
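
For reference, this is the argument struct of that hypercall, mirroring
`xen/include/public/event_channel.h` (the typedef widths here are
copied from those headers, not from this protocol):

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t domid_t;
typedef uint32_t evtchn_port_t;

/* EVTCHNOP_bind_interdomain: the backend passes the frontend's domid
 * and the **evtchn** port taken from the connect/accept request; the
 * hypervisor returns a local port bound to it, which the backend then
 * uses to send and receive notifications. */
struct evtchn_bind_interdomain {
    /* IN parameters. */
    domid_t       remote_dom;   /* the frontend domain */
    evtchn_port_t remote_port;  /* **evtchn** from the request */
    /* OUT parameters. */
    evtchn_port_t local_port;   /* port the backend uses afterwards */
};
```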


> > - replies to the frontend
> > 
> > The data ring format will be described in the following section.
> > 
> > Request fields:
> > 
> > - **cmd** value: 0
> > - additional fields:
> >   - **id**: identifies the socket
> >   - **addr**: address to connect to, see the address format section for more
> >     information
> >   - **len**: address length
> >   - **flags**: flags for the connection, reserved for future usage
> >   - **ref**: grant reference of the page containing `struct
> >     pvcalls_data_intf`
> >   - **evtchn**: port number of the evtchn to signal activity on the data 
> > ring
> > 
> > Request binary layout:
> > 
> >     8       12      16      20      24      28      32      36      40      
> > 44
> >     
> > +-------+-------+-------+-------+-------+-------+-------+-------+-------+
> >     |       id      |                            addr                       
> > |
> >     
> > +-------+-------+-------+-------+-------+-------+-------+-------+-------+
> >     | len   | flags |  ref  |evtchn |
> >     +-------+-------+-------+-------+
> > 
> > Response additional fields:
> > 
> > - **id**: echoed back from request
> > 
> > Response binary layout:
> > 
> >     16      20      24
> >     +-------+-------+
> >     |       id      |
> >     +-------+-------+
> > 
> > Return value:
> > 
> >   - 0 on success
> >   - See the [POSIX connect function][connect] for error names; the 
> > corresponding
> >     error numbers are specified later in this document.
> > 
> > #### Release
> > 
> > The **release** operation closes an existing active or a passive socket.
> > 
> > When a release command is issued on a passive socket, the backend
> > releases it and frees its internal mappings. When a release command is
> > issued for an active socket, the data ring is also unmapped and freed:
> > 
> > - frontend sends release command for an active socket
> > - backend releases the socket
> > - backend unmaps the data ring buffers
> > - backend unmaps the data ring interface
> > - backend unbinds the evtchn
> > - backend replies to frontend
> > - frontend frees ring and unbinds evtchn
> > 
> > Request fields:
> > 
> > - **cmd** value: 1
> > - additional fields:
> >   - **id**: identifies the socket
> >   - **reuse**: an optimization hint for the backend. The field is
> >     ignored for passive sockets. When set to 1, the frontend let the
> 
> s/let/lets/

Ok


> >     backend know that it will reuse exactly the same set of grant pages
> >     (interface and data ring) and evtchn when creating one of the next
> >     active sockets. The backend can take advantage of it by delaying
> >     unmapping grants and unbinding the evtchn. The backend is free to
> >     ignore the hint.
> 
> And how will the backend know this? Based on the **id** or on ring-ref?

Based on **id**. I'll write it down.
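
A possible backend-side sketch of the optimization (all names here are
illustrative, not part of the protocol): on release with reuse == 1,
stash the mappings keyed by the socket **id** instead of tearing them
down, so a later connect/accept with the same id can pick them up.

```c
#include <assert.h>
#include <stdint.h>

#define CACHE_SLOTS 8

struct cached_ring {
    uint64_t id;    /* socket id the grants/evtchn belonged to */
    int      valid;
    /* the mapped interface page, ring pages and bound evtchn
     * would be kept here */
};

static struct cached_ring cache[CACHE_SLOTS];

/* Honor (or ignore, if no slot is free) the reuse hint. */
static void release(uint64_t id, uint8_t reuse)
{
    if (reuse) {
        for (int i = 0; i < CACHE_SLOTS; i++) {
            if (!cache[i].valid) {
                cache[i].id = id;
                cache[i].valid = 1;
                return; /* keep grants mapped and evtchn bound */
            }
        }
    }
    /* reuse == 0, or no slot free: unmap grants, unbind evtchn */
}

/* On a later connect/accept, look for cached mappings by socket id. */
static int lookup_cached(uint64_t id)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].valid && cache[i].id == id)
            return i;
    return -1;
}
```

The backend stays free to drop cached entries at any time, as the doc
says: the hint only tells it that keeping them may pay off.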


> > Request binary layout:
> > 
> >     8       12      16    17
> >     +-------+-------+-----+
> >     |       id      |reuse|
> >     +-------+-------+-----+
> > 
> > Response additional fields:
> > 
> > - **id**: echoed back from request
> > 
> > Response binary layout:
> > 
> >     16      20      24
> >     +-------+-------+
> >     |       id      |
> >     +-------+-------+
> > 
> > Return value:
> > 
> >   - 0 on success
> >   - See the [POSIX shutdown function][shutdown] for error names; the
> >     corresponding error numbers are specified later in this document.
> > 
> > #### Bind
> > 
> > The **bind** operation corresponds to the POSIX [bind][bind] function.
> > It assigns the address passed as parameter to a previously created
> > socket, identified by **id**. **Bind**, **listen** and **accept** are
> > the three operations required to have fully working passive sockets and
> > should be issued in this order.
> 
> AHAH! Which explains why your PV_CALLS are in that order. Perhaps mention
> that in the section that introduces the PV calls?

Yes, I'll mention it


> > 
> > Request fields:
> > 
> > - **cmd** value: 2
> > - additional fields:
> >   - **id**: identifies the socket
> >   - **addr**: address to connect to, see the address format section for more
> >     information
> >   - **len**: address length
> > 
> > Request binary layout:
> > 
> >     8       12      16      20      24      28      32      36      40      
> > 44
> >     
> > +-------+-------+-------+-------+-------+-------+-------+-------+-------+
> >     |       id      |                            addr                       
> > |
> >     
> > +-------+-------+-------+-------+-------+-------+-------+-------+-------+
> >     |  len  |
> >     +-------+
> > 
> > Response additional fields:
> > 
> > - **id**: echoed back from request
> > 
> > Response binary layout:
> > 
> >     16      20      24
> >     +-------+-------+
> >     |       id      |
> >     +-------+-------+
> > 
> > Return value:
> > 
> >   - 0 on success
> >   - See the [POSIX bind function][bind] for error names; the corresponding 
> > error
> >     numbers are specified later in this document.
> > 
> > 
> > #### Listen
> > 
> > The **listen** operation marks the socket as a passive socket. It 
> > corresponds to
> > the [POSIX listen function][listen].
> > 
> > Request fields:
> > 
> > - **cmd** value: 3
> > - additional fields:
> >   - **id**: identifies the socket
> >   - **backlog**: the maximum length to which the queue of pending
> >     connections may grow
> > 
> > Request binary layout:
> > 
> >     8       12      16      20
> >     +-------+-------+-------+
> >     |       id      |backlog|
> >     +-------+-------+-------+
> > 
> > Response additional fields:
> > 
> > - **id**: echoed back from request
> > 
> > Response binary layout:
> > 
> >     16      20      24
> >     +-------+-------+
> >     |       id      |
> >     +-------+-------+
> > 
> > Return value:
> >   - 0 on success
> >   - See the [POSIX listen function][listen] for error names; the 
> > corresponding
> >     error numbers are specified later in this document.
> > 
> > 
> > #### Accept
> > 
> > The **accept** operation extracts the first connection request on the
> > queue of pending connections for the listening socket identified by
> > **id** and creates a new connected socket. The id of the new socket is
> > also chosen by the frontend and passed as an additional field of the
> > accept request struct (**id_new**). See the [POSIX accept function][accept]
> > as reference.
> > 
> > Similarly to the **connect** operation, **accept** creates a new data ring.
> > Information necessary to setup the new ring, such the grant table reference 
> > of
> > the page containing the data ring interface (`struct pvcalls_data_intf`) and
> > event channel port, are passed from the frontend to the backend as part of 
> > the
> > request.
> > 
> > The backend will reply to the request only when a new connection is 
> > successfully
> > accepted, i.e. the backend does not return EAGAIN or EWOULDBLOCK.
> > 
> > Example workflow:
> > 
> > - frontend issues an **accept** request
> > - backend waits for a connection to be available on the socket
> > - a new connection becomes available
> > - backend accepts the new connection
> > - backend creates an internal mapping from **id_new** to the new socket
> > - backend maps the grant reference **ref**; the shared page contains the
> >   data ring interface (`struct pvcalls_data_intf`)
> > - backend maps all the grant references listed in `struct
> >   pvcalls_data_intf` and uses them as shared memory for the new data
> >   ring
> > - backend binds the **evtchn**
> > - backend replies to the frontend
> > 
> > Request fields:
> > 
> > - **cmd** value: 4
> > - additional fields:
> >   - **id**: id of listening socket
> >   - **id_new**: id of the new socket
> >   - **ref**: grant reference of the data ring interface (`struct
> >     pvcalls_data_intf`)
> >   - **evtchn**: port number of the evtchn to signal activity on the data 
> > ring
> > 
> > Request binary layout:
> > 
> >     8       12      16      20      24      28      32
> >     +-------+-------+-------+-------+-------+-------+
> >     |       id      |    id_new     |  ref  |evtchn |
> >     +-------+-------+-------+-------+-------+-------+
> > 
> > Response additional fields:
> > 
> > - **id**: id of the listening socket, echoed back from request
> > 
> > Response binary layout:
> > 
> >     16      20      24
> >     +-------+-------+
> >     |       id      |
> >     +-------+-------+
> > 
> > Return value:
> > 
> >   - 0 on success
> >   - See the [POSIX accept function][accept] for error names; the 
> > corresponding
> >     error numbers are specified later in this document.
> > 
> > 
> > #### Poll
> > 
> > In this version of the protocol, the **poll** operation is only valid
> > for passive sockets. For active sockets, the frontend should look at the
> > state of the data ring. When a new connection is available in the queue
> > of the passive socket, the backend generates a response and notifies the
> > frontend.
> > 
> > Request fields:
> > 
> > - **cmd** value: 5
> > - additional fields:
> >   - **id**: identifies the listening socket
> > 
> > Request binary layout:
> > 
> >     8       12      16
> >     +-------+-------+
> >     |       id      |
> >     +-------+-------+
> > 
> > 
> > Response additional fields:
> > 
> > - **id**: echoed back from request
> > 
> > Response binary layout:
> > 
> >     16       20       24
> >     +--------+--------+
> >     |        id       |
> >     +--------+--------+
> > 
> > Return value:
> > 
> >   - 0 on success
> >   - See the [POSIX poll function][poll] for error names; the corresponding 
> > error
> >     numbers are specified later in this document.
> > 
> > #### Error numbers
> > 
> > The numbers corresponding to the error names specified by POSIX are:
> > 
> >     [EPERM]         -1
> >     [ENOENT]        -2
> >     [ESRCH]         -3
> >     [EINTR]         -4
> >     [EIO]           -5
> >     [ENXIO]         -6
> >     [E2BIG]         -7
> >     [ENOEXEC]       -8
> >     [EBADF]         -9
> >     [ECHILD]        -10
> >     [EAGAIN]        -11
> >     [EWOULDBLOCK]   -11
> >     [ENOMEM]        -12
> >     [EACCES]        -13
> >     [EFAULT]        -14
> >     [EBUSY]         -16
> >     [EEXIST]        -17
> >     [EXDEV]         -18
> >     [ENODEV]        -19
> >     [EISDIR]        -21
> >     [EINVAL]        -22
> >     [ENFILE]        -23
> >     [EMFILE]        -24
> >     [ENOSPC]        -28
> >     [EROFS]         -30
> >     [EMLINK]        -31
> >     [EDOM]          -33
> >     [ERANGE]        -34
> >     [EDEADLK]       -35
> >     [EDEADLOCK]     -35
> >     [ENAMETOOLONG]  -36
> >     [ENOLCK]        -37
> >     [ENOSYS]        -38
> >     [ENOTEMPTY]     -39
> >     [ENODATA]       -61
> >     [ETIME]         -62
> >     [EBADMSG]       -74
> >     [EOVERFLOW]     -75
> >     [EILSEQ]        -84
> >     [ERESTART]      -85
> >     [ENOTSOCK]      -88
> >     [EOPNOTSUPP]    -95
> >     [EAFNOSUPPORT]  -97
> >     [EADDRINUSE]    -98
> >     [EADDRNOTAVAIL] -99
> >     [ENOBUFS]       -105
> >     [EISCONN]       -106
> >     [ENOTCONN]      -107
> >     [ETIMEDOUT]     -110
> >     [ENOTSUPP]      -524
> 
> Oh, ENOTSUPP, EOPNOTSUPP, and other. Gosh that is quite the
> mess.

Yes, it is quite a mess. The Open Group defines:

[ENOTSUP]
    Not supported.
[EOPNOTSUPP]
    Operation not supported on socket.

and of course it also defines ENOSYS. As EOPNOTSUPP is socket specific,
I think that we should return ENOTSUP for unimplemented function calls
(or maybe ENOSYS).

The reason why I typed it as "ENOTSUPP" instead of "ENOTSUP" is that the
former is the one most used in the Linux kernel, but actually I think it
is a mistake. I'll rename it to ENOTSUP, which is the proper Open Group
error code.


> > #### Socket families and address format
> > 
> > The following definitions and explicit sizes, together with POSIX
> > [sys/socket.h][address] and [netinet/in.h][in], define socket families and
> > address formats. Please be aware that only **domain** `AF_INET`, **type**
> > `SOCK_STREAM` and **protocol** `0` are supported by this version of the
> > spec; others return ENOTSUPP.
> > 
> >     #define AF_UNSPEC   0
> >     #define AF_UNIX     1   /* Unix domain sockets      */
> >     #define AF_LOCAL    1   /* POSIX name for AF_UNIX   */
> >     #define AF_INET     2   /* Internet IP Protocol     */
> >     #define AF_INET6    10  /* IP version 6         */
> > 
> >     #define SOCK_STREAM 1
> >     #define SOCK_DGRAM  2
> >     #define SOCK_RAW    3
> > 
> >     /* generic address format */
> >     struct sockaddr {
> >         uint16_t sa_family_t;
> >         char sa_data[26];
> >     };
> > 
> >     struct in_addr {
> >         uint32_t s_addr;
> >     };
> > 
> >     /* AF_INET address format */
> >     struct sockaddr_in {
> >         uint16_t         sa_family_t;
> >         uint16_t         sin_port;
> >         struct in_addr   sin_addr;
> >         char             sin_zero[20];
> >     };
> > 
> > 
> > ### Data ring
> > 
> > Data rings are used for sending and receiving data over a connected socket. 
> > They
> > are created upon a successful **accept** or **connect** command.
> > 
> > A data ring is composed of two pieces: the interface and the **in** and
> > **out** buffers. The interface, represented by `struct pvcalls_data_intf`,
> > is shared first and resides on the page whose grant reference is passed by
> > **accept** and **connect** as a parameter. `struct pvcalls_data_intf`
> > contains the list of grant references which constitute the **in** and
> > **out** data buffers.
> > 
> > #### Data ring interface
> > 
> >     struct pvcalls_data_intf {
> >             PVCALLS_RING_IDX in_cons, in_prod;
> 
> Do you want to add some spacing between them so that you don't
> use the same cacheline?

Let me get this straight. in_cons and in_prod should be on the same
cacheline.  Similarly out_cons and out_prod should be on the same
cacheline, but on a different cacheline compared to in_cons and in_prod.

So I should add padding between the two pairs of indices. Something
like:
            uint8_t pad[cache_line_size - 8];

Where cache_line_size is usually 64 bytes. So this would be:

            uint8_t pad[56];

Is that right?


> >             PVCALLS_RING_IDX out_cons, out_prod;
> >             int32_t in_error, out_error;
> >     
> >             uint32_t ring_order;
> >             grant_ref_t ref[];
> >     };
> 
> That does not look like it would be to the power of two? Perhaps
> some padding?

I don't understand the suggestion about the padding here.

Do you mean the number of grant references in the array might not be a
power of two? I took it for granted, but I guess I should write it down
as a requirement. I don't think I should explicitly size the "ref" array
because the frontend is free to choose any size up to the max (as long
as it is a power of two).



> >     /* not actually C compliant (ring_order changes from socket to socket) 
> > */
> >     struct pvcalls_data {
> >         char in[((1<<ring_order)<<PAGE_SHIFT)/2];
> >         char out[((1<<ring_order)<<PAGE_SHIFT)/2];
> >     };
> > 
> > - **ring_order**
> >   It represents the order of the data ring. The following list of grant
> >   references is of `(1 << ring_order)` elements. It cannot be greater than
> >   **max-dataring-page-order**, as specified by the backend on XenBus.
> > - **ref[]**
> >   The list of grant references which will contain the actual data. They are
> >   mapped contiguously in virtual memory. The first half of the pages is the
> >   **in** array, the second half is the **out** array.
> > - **in** is an array used as circular buffer
> >   It contains data read from the socket. The producer is the backend, the
> >   consumer is the frontend.
> > - **out** is an array used as circular buffer
> >   It contains data to be written to the socket. The producer is the 
> > frontend,
> >   the consumer is the backend.
> > - **in_cons** and **in_prod**
> >   Consumer and producer pointers for data read from the socket. They keep 
> > track
> 
> s/pointers/indices/ ?

Yes, good suggestion


> >   of how much data has already been consumed by the frontend from the **in**
> >   array. **in_prod** is increased by the backend, after writing data to 
> > **in**.
> >   **in_cons** is increased by the frontend, after reading data from **in**.
> > - **out_cons**, **out_prod**
> >   Consumer and producer pointers for the data to be written to the socket. 
> > They
> >   keep track of how much data has been written by the frontend to **out** 
> > and
> >   how much data has already been consumed by the backend. **out_prod** is
> >   increased by the frontend, after writing data to **out**. **out_cons** is
> >   increased by the backend, after reading data from **out**.
> > - **in_error** and **out_error**
> >   They signal errors when reading from the socket (**in_error**) or when
> >   writing to the socket (**out_error**). 0 means no errors. When an error
> >   occurs, no further read or write operations are performed on the socket.
> >   In the case of an orderly socket shutdown (i.e. read returns 0)
> >   **in_error** is set to ENOTCONN. **in_error** and **out_error** are never
> >   set to EAGAIN or EWOULDBLOCK.
> > 
> > The binary layout of `struct pvcalls_data_intf` follows:
> > 
> >     0         4         8         12        16        20        24        28
> >     +---------+---------+---------+---------+---------+---------+----------+
> >     | in_cons | in_prod |out_cons |out_prod |in_error |out_error|ring_order|
> >     +---------+---------+---------+---------+---------+---------+----------+
> > 
> >     28        32        36        40        4092     4096
> >     +---------+---------+---------+----//---+---------+
> >     |  ref[0] |  ref[1] |  ref[2] |         |  ref[N] |
> >     +---------+---------+---------+----//---+---------+
> 
> Perhaps you can say:
> 
> **N.B** For one page, the N would be 2034.

I can do that, but actually there would be (4096-28)/4 = 1017 refs per
page, as ring refs are 4 bytes each (so the last index N would be 1016).


> > 
> > The binary layout of the ring buffers follow:
> > 
> >     0         ((1<<ring_order)<<PAGE_SHIFT)/2       
> > ((1<<ring_order)<<PAGE_SHIFT)
> >     +------------//-------------+------------//-------------+
> >     |            in             |           out             |
> >     +------------//-------------+------------//-------------+
> > 
> > #### Workflow
> > 
> > The **in** and **out** arrays are used as circular buffers:
> >     
> >     0                               sizeof(array) == 
> > ((1<<ring_order)<<PAGE_SHIFT)/2
> >     +-----------------------------------+
> >     |to consume|    free    |to consume |
> >     +-----------------------------------+
> >                ^            ^
> >                prod         cons
> > 
> >     0                               sizeof(array)
> >     +-----------------------------------+
> >     |  free    | to consume |   free    |
> >     +-----------------------------------+
> >                ^            ^
> >                cons         prod
> > 
> > The following function is provided to calculate how many bytes are currently
> > left unconsumed in an array:
> > 
> >     #define _MASK_PVCALLS_IDX(idx, ring_size) ((idx) & (ring_size-1))
> > 
> >     static inline PVCALLS_RING_IDX pvcalls_ring_queued(PVCALLS_RING_IDX 
> > prod,
> 
> s/queued/unconsumed/?

Fair enough, I think unconsumed might be clearer.


> >                     PVCALLS_RING_IDX cons,
> >                     PVCALLS_RING_IDX ring_size)
> >     {
> >             PVCALLS_RING_IDX size;
> >     
> >             if (prod == cons)
> >                     return 0;
> >     
> >             prod = _MASK_PVCALLS_IDX(prod, ring_size);
> >             cons = _MASK_PVCALLS_IDX(cons, ring_size);
> >     
> >             if (prod == cons)
> >                     return ring_size;
> >     
> >             if (prod > cons)
> >                     size = prod - cons;
> >             else {
> >                     size = ring_size - cons;
> >                     size += prod;
> >             }
> >             return size;
> >     }
> > 
> > The producer (the backend for **in**, the frontend for **out**) writes to 
> > the
> > array in the following way:
> > 
> > - read *cons*, *prod*, *error* from shared memory
> > - memory barrier
> > - return on *error*
> > - write to array at position *prod* up to *cons*, wrapping around the 
> > circular
> >   buffer when necessary
> > - memory barrier
> > - increase *prod*
> > - notify the other end via evtchn
> > 
> > The consumer (the backend for **out**, the frontend for **in**) reads from 
> > the
> > array in the following way:
> > 
> > - read *prod*, *cons*, *error* from shared memory
> > - memory barrier
> > - return on *error*
> > - read from array at position *cons* up to *prod*, wrapping around the 
> > circular
> >   buffer when necessary
> > - memory barrier
> > - increase *cons*
> > - notify the other end via evtchn
> > 
> > The producer takes care of writing only as many bytes as available in the 
> > buffer
> > up to *cons*. The consumer takes care of reading only as many bytes as 
> > available
> > in the buffer up to *prod*. *error* is set by the backend when an error 
> > occurs
> > writing or reading from the socket.
> > 
> > 
> > [address]: http://pubs.opengroup.org/onlinepubs/7908799/xns/syssocket.h.html
> > [in]: 
> > http://pubs.opengroup.org/onlinepubs/000095399/basedefs/netinet/in.h.html
> > [socket]: 
> > http://pubs.opengroup.org/onlinepubs/009695399/functions/socket.html
> > [connect]: http://pubs.opengroup.org/onlinepubs/7908799/xns/connect.html
> > [shutdown]: http://pubs.opengroup.org/onlinepubs/7908799/xns/shutdown.html
> > [bind]: http://pubs.opengroup.org/onlinepubs/7908799/xns/bind.html
> > [listen]: http://pubs.opengroup.org/onlinepubs/7908799/xns/listen.html
> > [accept]: http://pubs.opengroup.org/onlinepubs/7908799/xns/accept.html
> > [poll]: http://pubs.opengroup.org/onlinepubs/7908799/xsh/poll.html
> 
> 
> I think your patch below needs also some documentation (in xl.cfg) or such?

Yes indeed, I'll add it.


> > diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> > index c39d745..d784a10 100644
> > --- a/tools/libxl/libxl.c
> > +++ b/tools/libxl/libxl.c
> > @@ -2299,6 +2299,70 @@ int libxl_devid_to_device_vtpm(libxl_ctx *ctx,
> >      return rc;
> >  }
> >  
> > +/******************************************************************************/
> > +
> > +int libxl__device_pvcalls_setdefault(libxl__gc *gc, libxl_device_pvcalls 
> > *pvcalls)
> > +{
> > +    int rc;
> > +
> > +    rc = libxl__resolve_domid(gc, pvcalls->backend_domname, 
> > &pvcalls->backend_domid);
> > +    return rc;
> > +}
> > +
> > +static int libxl__device_from_pvcalls(libxl__gc *gc, uint32_t domid,
> > +                                   libxl_device_pvcalls *pvcalls,
> > +                                   libxl__device *device)
> > +{
> > +   device->backend_devid   = pvcalls->devid;
> > +   device->backend_domid   = pvcalls->backend_domid;
> > +   device->backend_kind    = LIBXL__DEVICE_KIND_PVCALLS;
> > +   device->devid           = pvcalls->devid;
> > +   device->domid           = domid;
> > +   device->kind            = LIBXL__DEVICE_KIND_PVCALLS;
> > +
> > +   return 0;
> > +}
> > +
> > +
> > +int libxl__device_pvcalls_add(libxl__gc *gc, uint32_t domid,
> > +                           libxl_device_pvcalls *pvcalls)
> > +{
> > +    flexarray_t *front;
> > +    flexarray_t *back;
> > +    libxl__device device;
> > +    int rc;
> > +
> > +    rc = libxl__device_pvcalls_setdefault(gc, pvcalls);
> > +    if (rc) goto out;
> > +
> > +    front = flexarray_make(gc, 16, 1);
> > +    back = flexarray_make(gc, 16, 1);
> > +
> > +    if (pvcalls->devid == -1) {
> > +        if ((pvcalls->devid = libxl__device_nextid(gc, domid, "pvcalls")) 
> > < 0) {
> > +            rc = ERROR_FAIL;
> > +            goto out;
> > +        }
> > +    }
> > +
> > +    rc = libxl__device_from_pvcalls(gc, domid, pvcalls, &device);
> > +    if (rc != 0) goto out;
> > +
> > +    flexarray_append_pair(back, "frontend-id", libxl__sprintf(gc, "%d", 
> > domid));
> > +    flexarray_append_pair(back, "online", "1");
> > +    flexarray_append_pair(back, "state", GCSPRINTF("%d", 
> > XenbusStateInitialising));
> > +    flexarray_append_pair(front, "backend-id",
> > +                          libxl__sprintf(gc, "%d", 
> > pvcalls->backend_domid));
> > +    flexarray_append_pair(front, "state", GCSPRINTF("%d", 
> > XenbusStateInitialising));
> > +
> > +    libxl__device_generic_add(gc, XBT_NULL, &device,
> > +                              libxl__xs_kvs_of_flexarray(gc, back, 
> > back->count),
> > +                              libxl__xs_kvs_of_flexarray(gc, front, 
> > front->count),
> > +                              NULL);
> > +    rc = 0;
> > +out:
> > +    return rc;
> > +}
> >  
> >  
> > /******************************************************************************/
> >  
> > @@ -4250,6 +4314,8 @@ out:
> >   * libxl_device_vfb_destroy
> >   * libxl_device_usbctrl_remove
> >   * libxl_device_usbctrl_destroy
> > + * libxl_device_pvcalls_remove
> > + * libxl_device_pvcalls_destroy
> >   */
> >  #define DEFINE_DEVICE_REMOVE_EXT(type, remtype, removedestroy, f)        \
> >      int libxl_device_##type##_##removedestroy(libxl_ctx *ctx,           \
> > @@ -4311,6 +4377,11 @@ DEFINE_DEVICE_REMOVE(vtpm, destroy, 1)
> >  DEFINE_DEVICE_REMOVE_CUSTOM(usbctrl, remove, 0)
> >  DEFINE_DEVICE_REMOVE_CUSTOM(usbctrl, destroy, 1)
> >  
> > +/* pvcalls */
> > +
> > +DEFINE_DEVICE_REMOVE(pvcalls, remove, 0)
> > +DEFINE_DEVICE_REMOVE(pvcalls, destroy, 1)
> > +
> >  /* channel/console hotunplug is not implemented. There are 2 possibilities:
> >   * 1. add support for secondary consoles to xenconsoled
> >   * 2. dynamically add/remove qemu chardevs via qmp messages. */
> > diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> > index 2c0f868..9358071 100644
> > --- a/tools/libxl/libxl.h
> > +++ b/tools/libxl/libxl.h
> > @@ -1753,6 +1753,16 @@ int libxl_device_vfb_destroy(libxl_ctx *ctx, 
> > uint32_t domid,
> >                               const libxl_asyncop_how *ao_how)
> >                               LIBXL_EXTERNAL_CALLERS_ONLY;
> >  
> > +/* pvcalls */
> > +int libxl_device_pvcalls_remove(libxl_ctx *ctx, uint32_t domid,
> > +                            libxl_device_pvcalls *pvcalls,
> > +                            const libxl_asyncop_how *ao_how)
> > +                             LIBXL_EXTERNAL_CALLERS_ONLY;
> > +int libxl_device_pvcalls_destroy(libxl_ctx *ctx, uint32_t domid,
> > +                             libxl_device_pvcalls *pvcalls,
> > +                             const libxl_asyncop_how *ao_how)
> > +                             LIBXL_EXTERNAL_CALLERS_ONLY;
> > +
> >  /* PCI Passthrough */
> >  int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
> >                           libxl_device_pci *pcidev,
> > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> > index 5000bd0..f019c37 100644
> > --- a/tools/libxl/libxl_create.c
> > +++ b/tools/libxl/libxl_create.c
> > @@ -1374,6 +1374,8 @@ static void domcreate_launch_dm(libxl__egc *egc, 
> > libxl__multidev *multidev,
> >              libxl__device_vfb_add(gc, domid, &d_config->vfbs[i]);
> >              libxl__device_vkb_add(gc, domid, &d_config->vkbs[i]);
> >          }
> > +        for (i = 0; i < d_config->num_pvcallss; i++)
> > +            libxl__device_pvcalls_add(gc, domid, &d_config->pvcallss[i]);
> >  
> >          init_console_info(gc, &console, 0);
> >          console.backend_domid = state->console_domid;
> > diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> > index c791418..063d926 100644
> > --- a/tools/libxl/libxl_internal.h
> > +++ b/tools/libxl/libxl_internal.h
> > @@ -1224,6 +1224,7 @@ _hidden int libxl__device_vkb_setdefault(libxl__gc 
> > *gc, libxl_device_vkb *vkb);
> >  _hidden int libxl__device_pci_setdefault(libxl__gc *gc, libxl_device_pci 
> > *pci);
> >  _hidden void libxl__rdm_setdefault(libxl__gc *gc,
> >                                     libxl_domain_build_info *b_info);
> > +_hidden int libxl__device_pvcalls_setdefault(libxl__gc *gc, 
> > libxl_device_pvcalls *pvcalls);
> >  
> >  _hidden const char *libxl__device_nic_devname(libxl__gc *gc,
> >                                                uint32_t domid,
> > @@ -2647,6 +2648,10 @@ _hidden int libxl__device_vkb_add(libxl__gc *gc, 
> > uint32_t domid,
> >  _hidden int libxl__device_vfb_add(libxl__gc *gc, uint32_t domid,
> >                                    libxl_device_vfb *vfb);
> >  
> > +/* Internal function to connect a pvcalls device */
> > +_hidden int libxl__device_pvcalls_add(libxl__gc *gc, uint32_t domid,
> > +                                  libxl_device_pvcalls *pvcalls);
> > +
> >  /* Waits for the passed device to reach state XenbusStateInitWait.
> >   * This is not really useful by itself, but is important when executing
> >   * hotplug scripts, since we need to be sure the device is in the correct
> > diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> > index 9840f3b..a99c7be 100644
> > --- a/tools/libxl/libxl_types.idl
> > +++ b/tools/libxl/libxl_types.idl
> > @@ -685,6 +685,12 @@ libxl_device_vtpm = Struct("device_vtpm", [
> >      ("uuid",             libxl_uuid),
> >  ])
> >  
> > +libxl_device_pvcalls = Struct("device_pvcalls", [
> > +    ("backend_domid",    libxl_domid),
> > +    ("backend_domname",  string),
> > +    ("devid",            libxl_devid),
> > +])
> > +
> >  libxl_device_channel = Struct("device_channel", [
> >      ("backend_domid", libxl_domid),
> >      ("backend_domname", string),
> > @@ -709,6 +715,7 @@ libxl_domain_config = Struct("domain_config", [
> >      ("vfbs", Array(libxl_device_vfb, "num_vfbs")),
> >      ("vkbs", Array(libxl_device_vkb, "num_vkbs")),
> >      ("vtpms", Array(libxl_device_vtpm, "num_vtpms")),
> > +    ("pvcallss", Array(libxl_device_pvcalls, "num_pvcallss")),
> 
> 
> I thought you said there can only be one PV calls per guest?

Yes, you are right. It doesn't make sense to allow more than one. I'll
fix the code.


> >      # a channel manifests as a console with a name,
> >      # see docs/misc/channels.txt
> >      ("channels", Array(libxl_device_channel, "num_channels")),
> > diff --git a/tools/libxl/libxl_types_internal.idl 
> > b/tools/libxl/libxl_types_internal.idl
> > index 177f9b7..b41122b 100644
> > --- a/tools/libxl/libxl_types_internal.idl
> > +++ b/tools/libxl/libxl_types_internal.idl
> > @@ -24,6 +24,7 @@ libxl__device_kind = Enumeration("device_kind", [
> >      (8, "VTPM"),
> >      (9, "VUSB"),
> >      (10, "QUSB"),
> > +    (11, "PVCALLS"),
> >      ])
> >  
> >  libxl__console_backend = Enumeration("console_backend", [
> > diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> > index 03ab644..23f9793 100644
> > --- a/tools/libxl/xl_cmdimpl.c
> > +++ b/tools/libxl/xl_cmdimpl.c
> > @@ -1898,6 +1898,18 @@ static void parse_config_data(const char 
> > *config_source,
> >              free(buf2);
> >          }
> >      }
> > +    
> > +    if (!xlu_cfg_get_long(config, "pvcalls", &l, 0)) {
> > +        libxl_device_pvcalls *pvcalls;
> > +        fprintf(stderr, "Creating pvcalls l=%lu\n", l);
> 
> Little devbug code :-)

:-)


> > +        d_config->num_pvcallss = 0;
> > +        d_config->pvcallss = NULL;
> > +        pvcalls = ARRAY_EXTEND_INIT(d_config->pvcallss,
> > +                d_config->num_pvcallss,
> > +                libxl_device_pvcalls_init);
> > +        libxl_device_pvcalls_init(pvcalls);
> > +        replace_string(&pvcalls->backend_domname, "0");
> > +    }
> >  
> >      if (!xlu_cfg_get_list (config, "channel", &channels, 0, 0)) {
> >          d_config->num_channels = 0;
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

