
Re: [Xen-devel] [DOC v8] PV Calls protocol design



..snip..
> > > Request fields:
> > > 
> > > - **cmd** value: 0
> > > - additional fields:
> > >   - **id**: identifies the socket
> > >   - **addr**: address to connect to, see [Socket families and address 
> > > format]
> > 
> > 
> > Hm, so what do we do if we want to support AF_UNIX which has an addr of
> > 108 bytes?
> 
> We write a protocol extension and bump the protocol version. However, we

Right. How would you change the protocol for this?

I'm not asking to have this in this protocol, but I just want us to think
about what we could do, so that if somebody were to implement this, we
could make it easier for them.

My initial thought was to spread the request over two "old" structures.
And if so, would it make sense to include an extra flag or such?
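For the sake of discussion, here is one way the two-structure idea could look. This is purely a hypothetical sketch: the chunk size, the struct layout, and the PVCALLS_FLAG_ADDR_CONT flag name are all made up, not part of the protocol. The idea is that each "old" request carries a slice of the 108-byte AF_UNIX address, with a flag saying "more address bytes follow in the next request":

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical: address bytes one "old" request slot can carry. */
#define PVCALLS_ADDR_CHUNK 28
/* Hypothetical flag: more address bytes follow in the next request. */
#define PVCALLS_FLAG_ADDR_CONT 0x1

struct pvcalls_connect_chunk {
    uint32_t flags;
    uint8_t addr[PVCALLS_ADDR_CHUNK];
};

/* Split a long (e.g. 108-byte AF_UNIX) address over several requests.
 * Returns the number of chunks used, or -1 if it does not fit. */
static int split_addr(const uint8_t *addr, size_t len,
                      struct pvcalls_connect_chunk *out, size_t max_chunks)
{
    size_t n = 0;

    while (len > 0 && n < max_chunks) {
        size_t c = len < PVCALLS_ADDR_CHUNK ? len : PVCALLS_ADDR_CHUNK;

        memcpy(out[n].addr, addr, c);
        addr += c;
        len -= c;
        /* Set the continuation flag on every chunk but the last. */
        out[n].flags = len ? PVCALLS_FLAG_ADDR_CONT : 0;
        n++;
    }
    return len ? -1 : (int)n;
}
```

With a 28-byte slice per request, a 108-byte address would take four requests, the last one with the continuation flag cleared.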

> could make the addr array size larger now to be more future proof, but
> it takes up memory and I have no use for it, given that we can use
> loopback for the same purpose.
> 

..snip..
> > > #### Indexes Page Structure
> > > 
> > >     typedef uint32_t PVCALLS_RING_IDX;
> > > 
> > >     struct pvcalls_data_intf {
> > >           PVCALLS_RING_IDX in_cons, in_prod;
> > >           int32_t in_error;
> > 
> > Don't you perhaps want to include in_event?
> > > 
> > >           uint8_t pad[52];
> > > 
> > >           PVCALLS_RING_IDX out_cons, out_prod;
> > >           int32_t out_error;
> > 
> > And out_event as a way to do some form of interrupt mitigation
> > (similar to what you had proposed?)
> 
> Yes, the in_event / out_event optimization that I wrote for the 9pfs
> protocol could work here too. However, I thought you preferred to remove
> it for now as it is not required and increases complexity?

I did. But I am coming back to it after looking at the ring.h header.

My recollection was that your optimization was a bit different than
what ring.h has.
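For reference, the ring.h-style notification test boils down to something like the sketch below (the generic event-index check, not necessarily the exact variant from the 9pfs proposal; field names are assumed). After advancing the producer index from prod_old to prod_new, the producer only notifies if the consumer's event index falls inside the interval it just produced:

```c
#include <stdint.h>

/* Event-index check in the style of ring.h's
 * RING_PUSH_REQUESTS_AND_CHECK_NOTIFY: notify iff
 * event is in the half-open interval (prod_old, prod_new].
 * Unsigned subtraction makes the test wrap-around safe. */
static int needs_notify(uint32_t prod_old, uint32_t prod_new,
                        uint32_t event)
{
    return (uint32_t)(prod_new - event) < (uint32_t)(prod_new - prod_old);
}
```

The consumer requests a notification by setting the event index just past what it has consumed; if nothing new was produced (prod_old == prod_new), the test is always false.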

> 
> We could always add it later, if we reserved some padding here for it.
> Something like:
> 
>    struct pvcalls_data_intf {
>       PVCALLS_RING_IDX in_cons, in_prod;
>       int32_t in_error;
> 
>       uint8_t pad[52];
> 
>       PVCALLS_RING_IDX out_cons, out_prod;
>       int32_t out_error;
> 
>       uint8_t pad[52]; <--- this is new
> 
>       uint32_t ring_order;
>       grant_ref_t ref[];
>    };
> 
> We have plenty of space for the grant refs anyway. This way, we can
> introduce in_event and out_event by eating up 4 bytes from each pad
> array.

That is true.
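To make the compatibility argument concrete, here is a sketch of the proposed layout next to a hypothetical future revision that takes 4 bytes from each pad for in_event/out_event (the two pad arrays are renamed pad1/pad2 so the structs actually compile; the v2 struct and its event field names are assumptions, not part of the spec):

```c
#include <stdint.h>
#include <stddef.h>

typedef uint32_t PVCALLS_RING_IDX;
typedef uint32_t grant_ref_t;

/* Proposed layout: symmetric 52-byte pads after each half. */
struct pvcalls_data_intf {
    PVCALLS_RING_IDX in_cons, in_prod;
    int32_t in_error;

    uint8_t pad1[52];

    PVCALLS_RING_IDX out_cons, out_prod;
    int32_t out_error;

    uint8_t pad2[52];

    uint32_t ring_order;
    grant_ref_t ref[];
};

/* Hypothetical future revision: in_event/out_event carved out of the
 * pads, 9pfs-style, shrinking each pad from 52 to 48 bytes. */
struct pvcalls_data_intf_v2 {
    PVCALLS_RING_IDX in_cons, in_prod;
    int32_t in_error;
    uint32_t in_event;

    uint8_t pad1[48];

    PVCALLS_RING_IDX out_cons, out_prod;
    int32_t out_error;
    uint32_t out_event;

    uint8_t pad2[48];

    uint32_t ring_order;
    grant_ref_t ref[];
};
```

Every field after each pad keeps its offset, so a frontend or backend that ignores the event fields would keep working unchanged.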
> 
> 
> > > 
> > >           uint32_t ring_order;
> > >           grant_ref_t ref[];
> > >     };
> > > 
> > >     /* not actually C compliant (ring_order changes from socket to 
> > > socket) */
> > >     struct pvcalls_data {
> > >         char in[((1<<ring_order)<<PAGE_SHIFT)/2];
> > >         char out[((1<<ring_order)<<PAGE_SHIFT)/2];
> > >     };
> > > 
> > > - **ring_order**
> > >   It represents the order of the data ring. The following list of grant
> > >   references is of `(1 << ring_order)` elements. It cannot be greater than
> > >   **max-page-order**, as specified by the backend on XenBus. It has to
> > >   be one at minimum.
> > 
> > Oh? Why not zero (4KB), since 'max-page-order' has an example of zero
> > order?
> > Perhaps if it MUST be one or more, then 'max-page-order' should say
> > that it MUST be at least one?
> 
> So that each in and out array gets to have its own dedicated page,
> although I don't think it's strictly necessary. With zero, they would
> get half a page each.

That is fine. Just please document 'max-page-order' to make it clear that
it MUST be 1 or higher.
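The half-a-page point is easy to check: with order 0 the single data page is split between in and out; with order 1 each direction gets a full page. A small sketch, assuming 4KB pages (PAGE_SHIFT of 12):

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12  /* assuming 4KB pages */

/* Size of each of the in/out arrays for a given ring_order: the data
 * ring spans (1 << ring_order) pages, split evenly between in and out,
 * matching the pvcalls_data definition in the draft. */
static size_t pvcalls_buf_size(unsigned ring_order)
{
    return ((size_t)(1u << ring_order) << PAGE_SHIFT) / 2;
}
```

So ring_order 0 gives 2048 bytes per direction, and ring_order 1 gives each direction its own dedicated 4096-byte page.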

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

