
Re: [Xen-devel] Proposal to allow setting up shared memory areas between VMs from xl config file



On Mon, May 22, 2017 at 02:14:41PM -0700, Stefano Stabellini wrote:
> On Mon, 22 May 2017, Ian Jackson wrote:
> > Stefano Stabellini writes ("Re: Proposal to allow setting up shared memory 
> > areas between VMs from xl config file"):
> > > In this scenario, she is going to write to the VM config files of the
> > > two apps that one page will be shared among the two, so that they can
> > > send each other messages. She will hard-code the address of the shared
> > > page in her "bare-metal" app.
> > 
> > Thanks.  This makes some sense.
> > 
> > How do these apps expect to interrupt each other, or do they poll in
> > the shared memory ?  What I'm getting at with this question is that
> > perhaps some event channels will need setting up.
> 
> As a matter of fact, I have been asking myself the same question. Nobody
> asked me explicitly for notifications support, so I didn't include it in
> the original project definition (it can always be added later) but I
> think it would be useful.
> 
> Edgar, Jarvis, do you have an opinion on this? Do (software) interrupts
> need to be set up together with the shared memory region to send
> notifications back and forth between the two guests, or are they
> unnecessary because the apps do polling anyway?

Hi Stefano,

Sorry, I haven't been following this thread in detail.

The requests I've heard of so far involve:

1. Static setup of memory/pages at a given guest physical address.
   Preferably by allowing the setup to control the real physical
   address as well (e.g., to select on-chip memories as the backing).
   Bonus for an option to let Xen dynamically allocate the physical
   memory.

   Preferably avoiding the need for hypercalls and such, as the guest
   may be an unmodified program that runs natively (without Xen).
   A rough sketch of this kind of usage follows after the list.

2. Interrupts would be done by means of IPIs, as if running natively
   on HW, either by some dedicated IP device or by using GIC PPIs/SGIs
   to raise interrupts on other cores. PPIs are a bit awkward as
   they conflict with the Xen model of multi-core intra-guest IPIs,
   as opposed to inter-guest IPIs. SGIs across guests could work.
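
To make point 1 concrete, here's a minimal sketch of what such a pair of
bare-metal apps could look like: one page at a hard-coded guest-physical
address, no hypercalls, plain polling. The address and the message layout
below are made up purely for illustration.

    #include <stdint.h>

    /* Hypothetical address, fixed in both guests' configs/linker scripts. */
    #define SHARED_PAGE_ADDR  0x7fe00000UL

    struct shm_msg {
        volatile uint32_t seq;           /* bumped after each new message   */
        volatile uint32_t len;           /* payload length in bytes         */
        volatile uint8_t  payload[4088]; /* rest of the 4K page             */
    };

    static struct shm_msg *const shm = (struct shm_msg *)SHARED_PAGE_ADDR;

    /* Producer guest: write the payload, then publish it by bumping seq. */
    void shm_send(const uint8_t *buf, uint32_t len)
    {
        for (uint32_t i = 0; i < len; i++)
            shm->payload[i] = buf[i];
        shm->len = len;
        __sync_synchronize();            /* payload visible before seq bump */
        shm->seq++;
    }

    /* Consumer guest: no interrupt available, so spin until seq changes. */
    uint32_t shm_recv(uint8_t *buf, uint32_t last_seq)
    {
        while (shm->seq == last_seq)
            ;                            /* plain polling */
        __sync_synchronize();
        for (uint32_t i = 0; i < shm->len; i++)
            buf[i] = shm->payload[i];
        return shm->seq;
    }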


> 
> Event channels are not as complex as grants, but they are not trivial
> either. Guests need to support the full event channel ABI even just to
> receive notifications from one event channel only (because they need to
> clear the pending bits), which is not simple and increases latency to
> de-multiplex events. See drivers/xen/events/events_2l.c and
> drivers/xen/events/events_base.c in Linux. I think we would have to
> introduce a simpler model, where each "notification channel" is not
> implemented by an event channel, but by a PPI or SGI instead. We expect
> only one or two to be used. PPIs and SGIs are interrupt classes on ARM;
> it is possible to allocate one or more for notification usage.

Yes, I wrote too fast, you're getting to the same point here...
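
Just for reference, roughly this much machinery is needed on the guest side
to take even a single notification with the 2-level ABI (a simplified sketch
loosely following drivers/xen/events/events_2l.c; the handle_event() hook
and the function name are only illustrative):

    #include <stdint.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * 8)

    extern void handle_event(unsigned int port);   /* illustrative handler */

    /*
     * pending/mask point at shared_info->evtchn_pending/evtchn_mask,
     * sel at vcpu_info->evtchn_pending_sel and
     * upcall at vcpu_info->evtchn_upcall_pending.
     */
    void evtchn_2l_demux(volatile unsigned long *pending,
                         volatile unsigned long *mask,
                         volatile unsigned long *sel,
                         volatile uint8_t *upcall)
    {
        *upcall = 0;
        unsigned long ready = __atomic_exchange_n(sel, 0, __ATOMIC_ACQ_REL);

        while (ready) {
            unsigned int word = __builtin_ctzl(ready);
            ready &= ready - 1;

            unsigned long bits = pending[word] & ~mask[word];
            while (bits) {
                unsigned int bit = __builtin_ctzl(bits);
                bits &= bits - 1;

                /* Clear the pending bit before dispatching the port. */
                __atomic_fetch_and(&pending[word], ~(1UL << bit),
                                   __ATOMIC_ACQ_REL);
                handle_event(word * BITS_PER_LONG + bit);
            }
        }
    }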



> 
> I think it is probably best to leave notifications to the future.

Perhaps yes.

In the ZynqMP case, as a first step, we can use the dedicated IPI blocks.
It would simply involve mapping IRQs and memory regions into the various
guests, and they would be able to raise interrupts to each other by memory
writes to the IPI devices. Xen doesn't need to be involved beyond that.
This should already work today.
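
As a rough sketch of what the two guests would then do, once Xen has mapped
one IPI agent's registers and its interrupt into each guest (e.g. via the
iomem=/irqs= options in the xl config): the register offsets below are only
what I recall and are meant as illustration, the ZynqMP TRM is authoritative.

    #include <stdint.h>

    /* Illustrative IPI agent register offsets (check the TRM). */
    #define IPI_TRIG  0x00u   /* write: trigger IPI towards remote agents */
    #define IPI_ISR   0x10u   /* read / write-1-to-clear: incoming IPIs   */

    static inline void mmio_write32(uintptr_t base, uint32_t off, uint32_t v)
    {
        *(volatile uint32_t *)(base + off) = v;
    }

    /* Guest A kicks guest B by setting B's agent bit in A's TRIG register. */
    void ipi_notify(uintptr_t my_ipi_base, uint32_t remote_agent_mask)
    {
        mmio_write32(my_ipi_base, IPI_TRIG, remote_agent_mask);
    }

    /* Guest B's IRQ handler acks the incoming IPI by writing 1 to clear. */
    void ipi_ack(uintptr_t my_ipi_base, uint32_t remote_agent_mask)
    {
        mmio_write32(my_ipi_base, IPI_ISR, remote_agent_mask);
    }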

Cheers,
Edgar


> 
> 
> > > There is no frontend and backend (as in the usual Xen meaning). In some
> > > configurations one app might be more critical than the other, but in
> > > some other scenarios they might have the same criticality.
> > 
> > Yes.
> > 
> > > If, as Jan pointed out, we need to call out explicitly which is the
> > > frontend and which is the backend for page ownership reasons, then I
> > > suggested we expose that configuration to the user, so that she can
> > > choose.
> > 
> > Indeed.
> > 
> > ISTM that this scenario doesn't depend on new hypervisor
> > functionality.  The toolstack could set up the appropriate page
> > sharing (presumably, this would be done with grants so that the result
> > is like something the guests could have done themselves.)
> 
> Right, I don't think we need new hypervisor functionalities. I don't
> have an opinion on whether it should be done with grants or with other
> hypercalls, although I have the feeling that it might be more difficult
> to achieve with grants. As long as it works... :-)
> 
> 
> > I see no objection to the libxl domain configuration file naming
> > guest-physical addresses for use in this way.
> >
> > One problem, though, is to do with startup order.  To do this in the
> > most natural way, one would want to start both guests at once so that
> > one would know their domids etc.  (Also that avoids questions like
> > `what if one of them crashes'...)
> > 
> > I'm not sure exactly how to fit this into the libxl model, which
> > mostly talks about one guest domain at a time; and each guest config
> > talks about persistent resources, rather than resources which are
> > somehow exposed by a particular guest.
> > 
> > I think this question is worth exploring to see what shape the right
> > solution is.
> 
> You are right that it would make sense to start both domains together
> but, to avoid confusion, I would stick with one config file per VM. I
> would still require the user to issue "xl create" twice to start the two
> guests.
> 
> If we require the user to specify the domain that provides the memory,
> then we establish a startup order naturally: the user needs to create
> the memory sharing domain first, and the memory mapping domain second.


 

