
Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file



On Tue, 18 Jul 2017, Zhongze Liu wrote:
> Hi Julien,
> 
> After our discussion during the summit, I have revised my plan, but
> I'm still working on it and haven't sent it to the ML yet.
> I'm planning to send a new version of my proposal together with the
> parsing code later so that I could reference the
> proposal in the commit message.
> But here is the part of my current draft that relates to our
> discussion about granularity:
> 
>   @granularity          can be a number with an optional unit: k, m,
>                         kb or mb; the final result should be a
>                         multiple of 4k.
> 
> The actual addresses of begin/end will then be calculated by multiplying
> them by @granularity. For example, if begin=0x100 and granularity=4k then the
> shared space will begin at the address 0x100000.
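> 
> As a rough sketch, the calculation in the toolstack could look like this
> (the function name is just illustrative, not the actual parser code):
> 
>     #include <stdint.h>
> 
>     /* Illustrative only: @granularity has already been parsed into
>      * bytes, e.g. "4k" -> 4096. */
>     uint64_t shared_mem_addr(uint64_t frame, uint64_t granularity)
>     {
>         return frame * granularity;
>     }
> 
>     /* shared_mem_addr(0x100, 4096) == 0x100000 */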

I would remove "granularity" from the interface and just use full
addresses for begin and end (or begin and size).
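
For example (purely illustrative values, reusing ID1 from your draft):

    static_shared_mem = ["id = ID1, begin = 0x100000, end = 0x180000,
                          prot = RO, master"]

This leaves no ambiguity about what the numbers mean, whatever page
granularity the guest or the hypervisor happens to use.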

 
> Cheers,
> 
> Zhongze Liu
> 
> 2017-07-18 20:10 GMT+08:00 Julien Grall <julien.grall@xxxxxxx>:
> > Hi,
> >
> >
> > On 20/06/17 18:18, Zhongze Liu wrote:
> >>
> >> ====================================================
> >> 1. Motivation and Description
> >> ====================================================
> >> Virtual machines use grant table hypercalls to set up shared pages for
> >> inter-VM communication. These hypercalls are used by all PV
> >> protocols today. However, very simple guests, such as baremetal
> >> applications, might not have the infrastructure to handle the grant table.
> >> This project is about setting up several shared memory areas for inter-VM
> >> communication directly from the VM config file, so that the guest
> >> kernel doesn't need grant table support (not unusual in the embedded
> >> space) to be able to communicate with other guests.
> >>
> >> ====================================================
> >> 2. Implementation Plan:
> >> ====================================================
> >>
> >> ======================================
> >> 2.1 Introduce a new VM config option in xl:
> >> ======================================
> >> The shared areas should be shareable among several (>=2) VMs, so
> >> every shared physical memory area is assigned to a set of VMs.
> >> Therefore, a "token" or "identifier" should be used here to uniquely
> >> identify a backing memory area.
> >>
> >> The backing area would be taken from one domain, which we will regard
> >> as the "master domain", and this domain should be created prior to any
> >> of the "slave domains". Again, we have to use some kind of tag to tell
> >> which one is the "master domain".
> >>
> >> The user should also be given the ability to specify the attributes
> >> (say, WO/RO/X) of the pages to be shared. For the master domain,
> >> these attributes usually describe the maximum permissions allowed for
> >> the shared pages, and for the slave domains, they describe the
> >> permissions with which this area will be mapped.
> >> This information should also be specified in the xl config entry.
> >>
> >> To handle all these, I would suggest using an unsigned integer to
> >> serve as the identifier, and using a "master" tag in the master
> >> domain's xl config entry to announce that it will provide the backing
> >> memory pages. A separate entry of the form "prot=RW" would be used to
> >> describe the attributes of the shared memory area.
> >> For example:
> >>
> >> In xl config file of vm1:
> >>
> >>     static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
> >>                           granularity = 4k, prot = RO, master",
> >>                          "id = ID2, begin = gmfn3, end = gmfn4,
> >>                           granularity = 4k, prot = RW, master"]
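> >>
> >> A slave-side entry would look similar, just without the "master" tag.
> >> E.g. in the xl config file of vm2 (values purely illustrative):
> >>
> >>     static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
> >>                           granularity = 4k, prot = RO"]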
> >
> >
> > Replying here regarding the discussion we had during the summit. AArch64
> > supports multiple page granularities (4KB, 16KB, 64KB).
> >
> > Each guest and the hypervisor are free to use different page
> > granularities. To go further, if I am not mistaken, an OS is free to use
> > a different page granularity on each processor.
> >
> > In reality, I have only seen OSes using the same granularity across all
> > the processors.
> >
> > At the moment, Xen only supports 4KB page granularity, although there
> > are plans to also support 64KB because this is the only way to support
> > physical addresses above 48 bits.
> >
> > With that in mind, this interface is a bit confusing. What does the
> > "granularity" refer to? Hypervisor? Guest A? Guest B?
> >
> > Similarly, gmfn* are frame numbers. But what is their granularity?
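> >
> > For instance, with 4KB pages gmfn 0x100 would mean address 0x100000, but
> > with 64KB pages the very same gmfn would mean 0x1000000.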
> >
> > I think it would make sense to start using full addresses on the
> > toolstack side, avoiding any confusion for the user about which page
> > granularity is meant here.
> >
> > Cheers,
> >
> > --
> > Julien Grall
> 

 

