
Re: [Xen-devel] [PATCH v3] displif: add ABI for para-virtual display

On 02/15/2017 11:33 PM, Konrad Rzeszutek Wilk wrote:
I will define 2 sections:
    *------------------ Connector Request Transport Parameters
    * ctrl-event-channel
    * ctrl-ring-ref
    *------------------- Connector Event Transport Parameters
    * event-channel
    * event-ring-ref
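For illustration, the frontend's XenStore directory might then look roughly like this (the path and the placeholder values are illustrative, not part of the proposal):

```
/local/domain/<frontend-id>/device/vdispl/<dev-id>/
    ctrl-ring-ref      = "<gref of the shared request/response ring page>"
    ctrl-event-channel = "<port used to signal activity on the ctrl ring>"
    event-ring-ref     = "<gref of the shared back-to-front event page>"
    event-channel      = "<port used to signal a new back-to-front event>"
```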

Or is the other ring buffer the one that is created via 'gref_directory' ?
At the bottom:
    * In order to deliver asynchronous events from back to front a shared page
    * is allocated by front and its gref is propagated to back via XenStore
    * entries (event-XXX).
And you may want to say this is guarded by the REQ_ALLOC feature, right?
Not sure I understood you. Event path is totally independent
from any feature, e.g. REQ_ALLOC.
It just provides means to send async events
from back to front, "page flip done" in my case.
<scratches his head> Why do you need a separate ring to send
responses back? Why not use the same ring on which requests
were sent?
Ok, it seems we are not on the same page for rings/channels usage.
Let me describe how those are used:

1. Command/control event channel and its corresponding ring are used
to pass requests from front to back (XENDISPL_OP_XXX) and get responses
from the back. These are synchronous, use macros from ring.h:
ctrl-event-channel + ctrl-ring-ref
I call them "ctrl-" because this way front controls back, or sends commands
if you will. Maybe "cmd-" would fit better here?

2. Event channel - asynchronous path for the backend to signal activity
to the frontend, currently used for "page flip done" event which is sent
at some point of time after back has actually completed the page flip
(so, before that the corresponding request was sent and its response
received, but the operation hadn't completed yet; it was only scheduled)
No macros exist for this use case in ring.h (kbdif and fbif implement
this on their own, and so do I)
These are:  event-channel + event-ring-ref
This is probably where the confusion comes from: naming.
We can have something like "be-to-fe-event-channel" or anything else
more cute and descriptive.

Hope this explains the need for 2 paths
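The two paths above can be sketched in C. Below is a minimal, self-contained model of the ctrl- (request/response) ring only; the real code uses the DEFINE_RING_TYPES()/RING_* macros from xen/io/ring.h, and the struct names, fields and ring size here are simplified stand-ins, not the actual ABI:

```c
/* Minimal sketch of the ctrl- request/response path (stand-in for ring.h). */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RING_SIZE 8 /* power of two, small enough to fit the shared page */

struct xendispl_req  { uint16_t id; uint8_t operation; /* XENDISPL_OP_XXX */ };
struct xendispl_resp { uint16_t id; int32_t status; };

/* Shared page layout: producer/consumer indices plus the two arrays. */
struct ctrl_ring {
    uint32_t req_prod, req_cons;   /* front produces, back consumes */
    uint32_t rsp_prod, rsp_cons;   /* back produces, front consumes */
    struct xendispl_req  req[RING_SIZE];
    struct xendispl_resp rsp[RING_SIZE];
};

/* Frontend: place a request and bump the producer index (a real
 * implementation adds memory barriers and an event-channel kick). */
static void front_send_req(struct ctrl_ring *r, const struct xendispl_req *req)
{
    r->req[r->req_prod % RING_SIZE] = *req;
    r->req_prod++;
}

/* Backend: consume one request and produce the matching response. */
static void back_process_one(struct ctrl_ring *r)
{
    const struct xendispl_req *req = &r->req[r->req_cons % RING_SIZE];
    r->req_cons++;
    r->rsp[r->rsp_prod % RING_SIZE] =
        (struct xendispl_resp){ .id = req->id, .status = 0 };
    r->rsp_prod++;
}

/* Frontend: consume one response, if any; returns 1 if one was taken. */
static int front_get_resp(struct ctrl_ring *r, struct xendispl_resp *out)
{
    if (r->rsp_cons == r->rsp_prod)
        return 0;
    *out = r->rsp[r->rsp_cons % RING_SIZE];
    r->rsp_cons++;
    return 1;
}
```

The point of the sketch is only that requests and their responses travel over the same shared page, matched by id, which is why this path alone cannot carry unsolicited back-to-front events.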

So this is like the network where there is an 'rx' and 'tx'!
kind of

Now I get it.
sorry, I was probably not clear

In that case why not just prefix it with 'in' and 'out'? Such as:

'out-ring-ref' and 'out-event-channel' and 'in-ring-ref' along
with 'in-event-channel'.
hmmm, it may confuse, because you must know from whose POV
"out" is meant, the frontend's or the backend's.
What is more, these "out-" and "in-" are... nameless?
Can we still have something like "ctrl-"/"cmd-"/"req-"
for the req/resp path and probably "evt-" for
events from back to front?
Or perhaps better - borrow the same idea that Stefano came up for
9pfs and PV calls - where his ring does both.

Then you just need 'ring-ref', 'event-channel', 'max-page-ring-order'
(which must be 1 or larger).

And you split the ring-ref in two - one for 'in' events and the other
part for 'out' events?
yes, I saw current implementations (kbdif, fbif) and
what Stefano did, but would rather stick to what is currently
defined (I believe it is optimal as is)
And hope that maybe someone will put new functionality into ring.h
to serve async events one day :)
..giant snip..

Thus, I was thinking of the XenbusStateReconfiguring state as appropriate in this case.
Right, but somebody has to move to this state. Who would do it?
when backend dies, its state changes to "closed".
At this moment front tries to remove the virtualized device;
if that is possible, it goes into the "initialized"
state. If not, into "reconfiguring".
So, you would ask how does the front go from "reconfiguring"
into "initialized" state? This is OS/front specific, but:
1. the underlying framework, e.g. DRM/KMS or ALSA, provides
callback(s) to signal that the last client of the
virtualized device has gone and the driver can be removed
(equivalent to a module's usage counter reaching 0)
2. one can schedule a delayed work (timer/tasklet/workqueue)
to periodically check if this is the right time to re-try
the removal and remove

In both cases, after the virtualized device has been removed we move
into "initialized" state again and are ready for new connections
with backend (if it arose from the dead :)
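The recovery transitions described above can be sketched as a tiny state machine; only the XenBus state names come from the discussion, the helper functions are hypothetical:

```c
/* Sketch of the frontend's recovery transitions on backend death. */
#include <assert.h>
#include <stdbool.h>

enum fe_state {
    FE_INITIALISING,    /* XenbusStateInitialising: ready for a new connection */
    FE_CONNECTED,       /* XenbusStateConnected: device in use */
    FE_RECONFIGURING,   /* XenbusStateReconfiguring: device busy, removal pending */
};

/* Backend went to "closed": try to remove the virtualized device right
 * away; fall back to Reconfiguring if clients still hold it. */
static enum fe_state on_backend_closed(bool device_removable)
{
    return device_removable ? FE_INITIALISING : FE_RECONFIGURING;
}

/* Later trigger (framework callback, or deferred timer/tasklet/workqueue
 * re-try): the last client of the virtualized device has gone. */
static enum fe_state on_last_client_gone(enum fe_state s)
{
    return (s == FE_RECONFIGURING) ? FE_INITIALISING : s;
}
```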
Would the
frontend have some form of timer to make sure that the backend is still
alive? And if it had died then move to Reconfiguring?
There are at least 2 ways to understand if back is dead:
1. XenBus state change (back is closed)
.. If the backend does a nice shutdown..
hm, on Linux I can kill -9 the backend and the XenBus driver seems
to be able to turn the back's state into "closed";
isn't this the expected behavior?
That is the expected behavior. I was thinking more of a backend
being a guest - and the guest completely going away with nobody
clearing its XenStore keys.

In which case your second option of doing a timeout will work.
But you may need a 'PING' type request to figure this out?
no ping, just usual calls, one of which will fail
ok, so this is what I have for the recovery flow now:

 *------------------------------- Recovery flow -------------------------------
 * In case of frontend unrecoverable errors backend handles that as
 * if frontend goes into the XenbusStateClosed state.
 * In case of backend unrecoverable errors frontend tries removing
 * the virtualized device. If this is possible at the moment of error,
 * then frontend goes into the XenbusStateInitialising state and is ready
 * for a new connection with backend. If the virtualized device is still
 * in use and cannot be removed, then frontend goes into the
 * XenbusStateReconfiguring state until either the virtualized device is
 * removed or backend initiates a new connection. On the virtualized device
 * removal frontend goes into the XenbusStateInitialising state.
 * Note on the XenbusStateReconfiguring state of the frontend: if backend has
 * unrecoverable errors then frontend cannot send requests to the backend
 * and thus cannot provide functionality of the virtualized device anymore.
 * After backend is back to normal the virtualized device may still hold some
 * state: configuration in use, allocated buffers, client application state
 * etc. So, in most cases, this will require frontend to implement complex
 * recovery/reconnect logic. Instead, by going into the
 * XenbusStateReconfiguring state, frontend will make sure no new clients of
 * the virtualized device are accepted, allow existing client(s) to exit
 * gracefully by signaling error state etc.
 * Once all the clients are gone frontend can reinitialize the virtualized
 * device and get into the XenbusStateInitialising state again, signaling the
 * backend that a new connection can be made.
 * There are multiple conditions possible under which frontend will go from
 * XenbusStateReconfiguring into XenbusStateInitialising, some of them OS
 * specific. For example:
 * 1. The underlying OS framework may provide callbacks to signal that the
 *    last client of the virtualized device has gone and the device can be
 *    removed.
 * 2. Frontend can schedule a deferred work (timer/tasklet/workqueue)
 *    to periodically check if this is the right time to re-try removal of
 *    the virtualized device.
 * 3. By any other means.

I would also like to re-use this as-is for sndif; for that reason I do not say
"virtualized display" here, but keep it nameless, e.g. "virtualized device"

I am attaching the diff between v3 and v4 for your convenience

Thank you,

Attachment: v4.diff
Description: Text Data
