Re: [Xen-devel] [PATCH v2 05/13] xen/pvcalls: implement bind command
>> This all looks very similar to previous patches. Can it be factored out?
>
> You are right that the pattern is the same for all commands:
> - get a request
> - fill the request
> - possibly do something else
> - wait
> however each request is different, the struct and fields are different.
> There are spin_lock and spin_unlock calls intermingled. I am not sure I
> can factor out much of this. Maybe I could create a static inline or
> macro as syntactic sugar to replace the wait call, but that's pretty
> much it I think.

Maybe you could factor out common fragments, not necessarily the whole
thing at once? For example,

static inline int get_request(*bedata, int *req_id)
{
        *req_id = bedata->ring.req_prod_pvt & (RING_SIZE(&bedata->ring) - 1);

        if (RING_FULL(&bedata->ring) ||
            READ_ONCE(bedata->rsp[*req_id].req_id) != PVCALLS_INVALID_ID)
                return -EAGAIN;

        return 0;
}

(or some such)

>> Also, you've used wait_event_interruptible in socket() implementation.
>> Why not here (and connect())?
>
> My intention was to use wait_event to wait for replies everywhere but I
> missed some of them in the conversion (I used to use
> wait_event_interruptible in early versions of the code).
>
> The reason to use wait_event is that it makes it easier to handle the
> rsp slot in bedata (bedata->rsp[req_id]): in case of EINTR the response
> in bedata->rsp would not be cleared by anybody. If we use wait_event
> there is no such problem, and the backend could still return EINTR and
> we would handle it just fine as any other response.

I was actually wondering about this myself when I was looking at
socket(), but then I somehow convinced myself (incorrectly!) that it
was OK.

-boris
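
For reference, a rough, self-contained sketch of how the suggested helper
and the common get-request/fill/notify/wait pattern could fit together in
the bind path. struct pvcalls_bedata, PVCALLS_INVALID_ID,
PVCALLS_NR_RSP_PER_RING, the inflight_req wait queue, the socket_lock name
and the pvcalls_do_bind() wrapper below are assumptions based on this
discussion, not code taken from the patch:

/*
 * Illustrative sketch only: the driver-private definitions below are
 * assumed from the thread above and may not match the patch exactly.
 */
#include <linux/kernel.h>
#include <linux/socket.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/wait.h>
#include <xen/events.h>
#include <xen/page.h>
#include <xen/interface/io/pvcalls.h>
#include <xen/interface/io/ring.h>

#define PVCALLS_INVALID_ID UINT_MAX
#define PVCALLS_NR_RSP_PER_RING __CONST_RING_SIZE(xen_pvcalls, XEN_PAGE_SIZE)

struct pvcalls_bedata {
        struct xen_pvcalls_front_ring ring;   /* shared ring with the backend */
        int irq;                               /* event channel irq */
        spinlock_t socket_lock;                /* serializes ring producers */
        wait_queue_head_t inflight_req;        /* woken by the irq handler */
        /* rsp[] slots are set to PVCALLS_INVALID_ID when the ring is set up */
        struct xen_pvcalls_response rsp[PVCALLS_NR_RSP_PER_RING];
};

/* The helper suggested above, with the types filled in. */
static inline int get_request(struct pvcalls_bedata *bedata, int *req_id)
{
        *req_id = bedata->ring.req_prod_pvt & (RING_SIZE(&bedata->ring) - 1);

        /* Back off if the ring is full or the slot's previous response
         * has not been consumed yet. */
        if (RING_FULL(&bedata->ring) ||
            READ_ONCE(bedata->rsp[*req_id].req_id) != PVCALLS_INVALID_ID)
                return -EAGAIN;

        return 0;
}

/*
 * The common "get a request, fill it, notify, wait" pattern, shown for
 * bind; only the "fill" step differs between commands.
 */
static int pvcalls_do_bind(struct pvcalls_bedata *bedata, uint64_t sock_id,
                           const struct sockaddr *addr, int addr_len)
{
        struct xen_pvcalls_request *req;
        int notify, req_id, ret;

        spin_lock(&bedata->socket_lock);
        ret = get_request(bedata, &req_id);
        if (ret < 0) {
                spin_unlock(&bedata->socket_lock);
                return ret;
        }

        /* Fill the request: the only command-specific part. */
        req = RING_GET_REQUEST(&bedata->ring, req_id);
        req->req_id = req_id;
        req->cmd = PVCALLS_BIND;
        req->u.bind.id = sock_id;
        memcpy(req->u.bind.addr, addr, addr_len);
        req->u.bind.len = addr_len;

        bedata->ring.req_prod_pvt++;
        RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&bedata->ring, notify);
        spin_unlock(&bedata->socket_lock);
        if (notify)
                notify_remote_via_irq(bedata->irq);

        /*
         * Non-interruptible wait, as discussed above: if this could return
         * on a signal, nobody would clear rsp[req_id] below and the slot
         * would stay busy forever.
         */
        wait_event(bedata->inflight_req,
                   READ_ONCE(bedata->rsp[req_id].req_id) == req_id);

        /* Make sure req_id is read before the rest of the response. */
        smp_rmb();
        ret = bedata->rsp[req_id].ret;

        /* Release the slot so get_request() can hand it out again. */
        bedata->rsp[req_id].req_id = PVCALLS_INVALID_ID;

        return ret;
}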