Re: [Xen-devel] [DOC v8] PV Calls protocol design
On Fri, Feb 10, 2017 at 12:09:36PM -0800, Stefano Stabellini wrote:
> On Fri, 10 Feb 2017, Konrad Rzeszutek Wilk wrote:
> > .snip..
> > > > > Request fields:
> > > > >
> > > > > - **cmd** value: 0
> > > > > - additional fields:
> > > > > - **id**: identifies the socket
> > > > > - **addr**: address to connect to, see [Socket families and address
> > > > > format]
> > > >
> > > >
> > > > Hm, so what do we do if we want to support AF_UNIX which has an addr of
> > > > 108 bytes?
> > >
> > > We write a protocol extension and bump the protocol version. However, we
> >
> > Right. How would you change the protocol for this?
> >
> > I'm not asking to have this in this protocol, but I want us to think
> > about what we could do so that, if somebody were to implement this,
> > it would be easier for them.
> >
> > My initial thought was to spread the request over two "old" structures.
> > And if so .. would it make sense to include an extra flag or such?
>
> That's a possibility, but I don't think we need an extra flag. It would
> be easier to introduce a new command, such as PVCALLS_CONNECT_EXTENDED
> or PVCALLS_CONNECT_V2, with the appropriate flags to say that it will
> make use of two request slots instead of one.
Fair enough. Perhaps include a section in the document about how one
could extend the protocol, and mention this case? That would make it
easier for folks to follow a 'paved' way.
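
For illustration, here is a hypothetical sketch of such an extension,
assuming a PVCALLS_CONNECT_V2 command that spreads one logical request
over two consecutive ring slots. The command value, field names, and
chunk sizes below are made up for this example and are not part of the
protocol:

    #define PVCALLS_CONNECT_V2 13   /* example value, not allocated */

    /* First slot: identifies the socket and carries the total
     * address length plus the first chunk of the address. */
    struct pvcalls_connect_v2_first {
        uint64_t id;          /* identifies the socket */
        uint32_t addr_len;    /* up to 108 bytes for sockaddr_un */
        uint8_t  addr[44];    /* first chunk of the address */
    };

    /* Second slot: carries the remainder of the address. The
     * frontend fills in both slots before advancing req_prod past
     * the second one, so the backend never sees a half request. */
    struct pvcalls_connect_v2_second {
        uint8_t addr[64];     /* remaining chunk, zero-padded */
    };

A backend could advertise support for the new command via a xenstore
feature flag, keeping old frontends and backends interoperable.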
>
>
> > > could make the addr array size larger now to be more future-proof, but it
> > > it takes up memory and I have no use for it, given that we can use
> > > loopback for the same purpose.
> > >
> >
> > ..snip..
> > > > > #### Indexes Page Structure
> > > > >
> > > > > typedef uint32_t PVCALLS_RING_IDX;
> > > > >
> > > > > struct pvcalls_data_intf {
> > > > > PVCALLS_RING_IDX in_cons, in_prod;
> > > > > int32_t in_error;
> > > >
> > > > Don't you perhaps want to include in_event?
> > > > >
> > > > > uint8_t pad[52];
> > > > >
> > > > > PVCALLS_RING_IDX out_cons, out_prod;
> > > > > int32_t out_error;
> > > >
> > > > And out_event as a way to do some form of interrupt mitigation
> > > > (similar to what you had proposed)?
> > >
> > > Yes, the in_event / out_event optimization that I wrote for the 9pfs
> > > protocol could work here too. However, I thought you preferred to remove
> > > it for now as it is not required and increases complexity?
> >
> > I did. But I am coming to it by looking at the ring.h header.
> >
> > My recollection was that your optimization was a bit different than
> > what ring.h has.
>
> Right. They are similar, but different because in this protocol we have
> two rings: the `in' ring and the `out' ring. Each ring is
> mono-directional and there is no static request size: the producer
> writes opaque data to the ring. In ring.h the two are combined and
> the request size is static and well-known. In PVCalls:
>
> in -> backend to frontend only
> out-> frontend to backend only
>
> Let's talk about the `in' ring, where the frontend is the consumer
> and the backend is the producer. Everything is the same but mirrored for
> the `out' ring.
>
> The producer doesn't need any notifications unless the ring is full.
> The producer, the backend in this case, never reads from the `in' ring.
> Thus, I disabled notifications to the producer by default and added an
> in_event field for the producer to ask for notifications only when
> necessary, that is when the ring is full.
>
> On the other end, the consumer always requires notifications, unless
> it is already actively reading from the ring. The producer could
> figure this out without any additional fields in the protocol: it can
> simply compare the indexes at the beginning and at the end of the
> function, which is similar to what the ring.h macros do.
I like your description! Could you include this in a section
titled 'Why ring.h macros are not needed', please?
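
As an illustration of the scheme described above, here is a minimal
sketch of the `in' ring from the producer's (the backend's) point of
view, assuming the in_event field is added as in the 9pfs proposal.
The exact layout, the helper name pvcalls_in_write, the
xen_mb()/xen_wmb() barriers and the power-of-two ring size are
assumptions for illustration, not part of the v8 document:

    typedef uint32_t PVCALLS_RING_IDX;

    /* Indexes page with the optional event fields added; the pad
     * shrinks from 52 to 48 bytes to keep the `out' indexes at the
     * same offset (layout illustrative only). */
    struct pvcalls_data_intf {
        PVCALLS_RING_IDX in_cons, in_prod;
        int32_t in_error;
        uint32_t in_event;   /* producer sets it to ask for an event */
        uint8_t pad[48];

        PVCALLS_RING_IDX out_cons, out_prod;
        int32_t out_error;
        uint32_t out_event;
    };

    /* Backend writing `len' bytes of opaque data to the `in' ring.
     * On success, *notify says whether to send an event channel
     * notification to the frontend. */
    static int pvcalls_in_write(struct pvcalls_data_intf *intf,
                                uint8_t *ring, uint32_t ring_size,
                                const uint8_t *buf, uint32_t len,
                                int *notify)
    {
        PVCALLS_RING_IDX cons = intf->in_cons;  /* snapshot at entry */
        PVCALLS_RING_IDX prod = intf->in_prod;
        uint32_t i;

        xen_mb();  /* read the indexes before touching the data ring */

        if (ring_size - (prod - cons) < len) {
            /* Ring full: the only case in which the producer needs a
             * notification. Set in_event so the consumer sends one
             * once it has freed some space, then retry later. */
            intf->in_event = 1;
            xen_mb();
            return -1;
        }

        for (i = 0; i < len; i++)
            ring[(prod + i) & (ring_size - 1)] = buf[i];

        xen_wmb();            /* data visible before the index update */
        intf->in_prod = prod + len;
        xen_mb();

        /* If in_cons moved while we were writing, the consumer is
         * actively reading and will see the new data on its own;
         * otherwise it needs an event. Comparing the index at entry
         * and exit replaces ring.h's explicit event counters. */
        *notify = (intf->in_cons == cons);
        return 0;
    }

Everything is mirrored for the `out' ring, with the frontend as the
producer and out_event playing the role of in_event.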