Re: [PATCH] xen/evtchn: Add design for static event channel signaling for domUs
On Thu, 14 Apr 2022, Bertrand Marquis wrote:
> > On 14 Apr 2022, at 02:14, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
> >
> > On Mon, 11 Apr 2022, Bertrand Marquis wrote:
> >> What you mention here is actually combining 2 different solutions inside
> >> Xen to build a custom communication solution.
> >> My assumption here is that the user will actually create the device tree
> >> nodes he wants to do that, and we should not create guest node entries
> >> as it would enforce some design.
> >>
> >> If everything can be statically defined for Xen then the user can also
> >> statically define node entries inside his guest to make use of the events
> >> and the shared memories.
> >>
> >> For example one might need more than one event to build a communication
> >> system, or more than one shared memory, or could build something
> >> communicating with multiple guests, thus requiring even more events and
> >> shared memories.
> >
> > Hi Bertrand, Rahul,
> >
> > If the guests are allowed some level of dynamic discovery, this feature
> > is not needed. They can discover the shared memory location from the
> > domU device tree, then proceed to allocate evtchns as needed and tell
> > the other end the evtchn numbers over shared memory. I already have an
> > example of it here:
> >
> > https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/2251030537/Xen+Shared+Memory+and+Interrupts+Between+VMs
> >
> > What if the guest doesn't support device tree at runtime, like baremetal
> > or Zephyr? The shared memory address can be hardcoded or generated from
> > the device tree at build time. That's no problem. Then, the event channels
> > can still be allocated at runtime and passed to the other end over
> > shared memory. That's what the example on the wiki page does.
> >
> > When are static event channels actually useful? When the application
> > cannot allocate the event channels at runtime at all.
> > The reason for the
> > restriction could be related to safety (no dynamic allocations at
> > runtime) or convenience (everything else is fully static, why should the
> > event channel numbers be dynamic?)
>
> Another use case here is dom0less: you cannot have dom0 create them.
> >
> > Given the above, I can see why there is no need to describe the static
> > event channel info in the domU device tree: static event channels are
> > only useful in fully static configurations, and in those configurations
> > the domU device tree dynamically generated by Xen is not needed. I can
> > see where you are coming from.
> >
> > The workflow that we have been trying to enable with the System Device
> > Tree effort (System Device Tree is similar to a normal Device Tree plus
> > the xen,domains nodes) is the following:
> >
> > S-DT ---[lopper]---> Linux DT
> >   \---> Zephyr DT ---[Zephyr build]---> Zephyr .h files
> >
> > S-DT contains all the needed information for both the regular Linux DT
> > generation and also the Zephyr/RTOS/baremetal header file generation,
> > which happens at build time.
> >
> > S-DT is not the same as the Xen device tree, but so far it has been
> > conceptually and practically similar. I have always imagined that the
> > bindings we have in Xen will also have corresponding bindings in System
> > Device Tree.
> >
> > For this workflow to work, S-DT needs all the info so that the Linux DT,
> > the Zephyr DT and the Zephyr .h files can all be generated.
> >
> > Does this proposal contain enough information so that Zephyr .h files
> > could be statically generated with the event channel numbers and static
> > shared memory region addresses?
> >
> > I am not sure. Maybe not?
>
> Yes, it should be possible to have all the info, as the integrator will set
> up the system and will decide upfront the address and the event number(s).

> > It is possible that the shared memory usage is so application specific
> > that there is no point in even talking about it.
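As a sketch of the build-time .h generation discussed above: the S-DT flow could emit a header along these lines. Every macro name and value here is hypothetical (chosen to mirror the kind of constants a static configuration pins down), not taken from any existing lopper or Zephyr binding.

```c
/* Hypothetical build-time generated header: all names and values are
 * illustrative only. The integrator fixes the guest-physical address,
 * the region size, and the static event channel port numbers upfront,
 * matching what is written in the Xen device tree. */
#define XEN_SHM_GPA        0x60000000UL /* guest-physical base of the shared region */
#define XEN_SHM_SIZE       0x20000000UL /* size of the shared region */
#define XEN_EVTCHN_LOCAL   1U           /* this guest's static event channel port */
#define XEN_EVTCHN_REMOTE  2U           /* peer's static event channel port */
```

A baremetal or Zephyr guest could then use such constants directly instead of parsing a device tree at runtime.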
> > But I think that
> > introducing a simple bundle of both event channels and shared memory
> > would help a lot.
> >
> > Something like the following in the Xen device tree would be enough to
> > specify an arbitrary number of event channels connecting the same
> > domains that share the memory region.
> >
> > It looks like, if we did the below, we would carry a lot more useful
> > information compared to the original proposal alone. We could add a
> > similar xen,notification property to the domU reserved-memory region in
> > the device tree generated by Xen for consistency, so that everything
> > available to the domU is described fully in device tree.
> >
> > domU1 {
> >     compatible = "xen,domain";
> >
> >     /* one sub-node per local event channel */
> >     ec1: evtchn@1 {
> >         compatible = "xen,evtchn-v1";
> >         /* local-evtchn link-to-foreign-evtchn */
> >         xen,evtchn = <0x1 &ec3>;
> >     };
> >     ec2: evtchn@2 {
> >         compatible = "xen,evtchn-v1";
> >         xen,evtchn = <0x2 &ec4>;
> >     };
> >
> >     /*
> >      * shared memory region between domU1 and domU2
> >      */
> >     domU1-shared-mem@50000000 {
> >         compatible = "xen,domain-shared-memory-v1";
> >         xen,shm-id = <0x1>;
> >         xen,shared-mem = <0x50000000 0x20000000 0x60000000>;
> >         /* this is new */
> >         xen,notification = <&ec1 &ec2>;
> >     };
> > };
> >
> > domU2 {
> >     compatible = "xen,domain";
> >
> >     /* one sub-node per local event channel */
> >     ec3: evtchn@3 {
> >         compatible = "xen,evtchn-v1";
> >         /* local-evtchn link-to-foreign-evtchn */
> >         xen,evtchn = <0x3 &ec1>;
> >     };
> >     ec4: evtchn@4 {
> >         compatible = "xen,evtchn-v1";
> >         xen,evtchn = <0x4 &ec2>;
> >     };
> >
> >     /*
> >      * shared memory region between domU1 and domU2
> >      */
> >     domU2-shared-mem@50000000 {
> >         compatible = "xen,domain-shared-memory-v1";
> >         xen,shm-id = <0x1>;
> >         xen,shared-mem = <0x50000000 0x20000000 0x70000000>;
> >         /* this is new */
> >         xen,notification = <&ec3 &ec4>;
> >     };
> > };
>
> Few remarks/questions on this:
> - this is not a shared memory anymore, as you add a notification system
>   to it
> - what if someone wants to use only a shared memory, or an event; what
>   should Xen do?

They still can. xen,notification would only be an optional property, not a
mandatory property. So it is still possible to have shared memory without
notifications (skip the xen,notification property), or event channels without
shared memory (do not link the evtchn to xen,notification).

> - in the Xen device tree, how do you associate the event with the shared
>   memory?

I don't think I understand the question. The example above shows how to
associate the event with the shared memory: the only additional thing needed
(compared to proposal 2 already discussed) is the new optional property
xen,notification.

Xen itself wouldn't have to do anything special when xen,notification is
specified, but would add a similar optional xen,notification property to the
generated domU device tree.

> > The good thing about this is that:
> >
> > - it is very flexible
> > - nothing to do in this series, except switching to the
> >   one-subnode-per-evtchn model, which we called 2) in the previous email
> > - there were good reasons to use the one-subnode-per-evtchn model anyway
> > - the xen,notification property can be added later without issues, after
> >   Penny's series
> >
> > There are a couple of ways to implement the xen,notification property
> > but we don't need to discuss them now.
>
> I think there is something to do here, but we need a bit more discussion
> and this can be done later.
> Right now I am not quite sure we will not add something that will end up
> not being used.
Yes, I am not asking to add xen,notification now, neither to the Xen device
tree nor to the domU device tree. I am only trying to make sure it would be
possible to do something like it without major changes to the existing device
tree. And I think it is possible if we use proposal 2).

> > Short Summary
> > -------------
> > I think it is fine to only introduce the Xen device tree binding for
> > static event channels without a domU binding, but I would prefer if we
> > switched to using proposal 2) "one subnode per event channel".
>
> I will let Rahul answer on that.
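To make the "optional property" point in the thread concrete: since xen,notification would be optional, a shared-memory node from the earlier example stays valid with the property simply omitted. This is only a sketch, reusing the hypothetical addresses from that example.

```dts
/* Shared memory only: the same node as in the example above, just
 * without the optional xen,notification property. */
domU1-shared-mem@50000000 {
    compatible = "xen,domain-shared-memory-v1";
    xen,shm-id = <0x1>;
    xen,shared-mem = <0x50000000 0x20000000 0x60000000>;
};
```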