
Re: [Xen-devel] [RFC v2] Proposal to allow setting up shared memory areas between VMs from xl config file



On Wed, 21 Jun 2017, Zhongze Liu wrote:
> ====================================================
> 1. Motivation and Description
> ====================================================
> Virtual machines use grant table hypercalls to set up shared pages for
> inter-VM communication. These hypercalls are used by all PV protocols
> today. However, very simple guests, such as baremetal applications, might
> not have the infrastructure to handle the grant table. This project is
> about setting up several shared memory areas for inter-VM communication
> directly from the VM config file, so that a guest kernel without grant
> table support (not unusual in the embedded space) can still communicate
> with other guests.
> 
> ====================================================
> 2. Implementation Plan:
> ====================================================
> 
> ======================================
> 2.1 Introduce a new VM config option in xl:
> ======================================
> The shared areas should be shareable among several (>=2) VMs, so every
> shared physical memory area is assigned to a set of VMs. Therefore, a
> "token" or "identifier" is needed to uniquely identify each backing
> memory area.
> 
> The backing area is taken from one domain, which we will call the
> "master domain", and this domain must be created prior to any of the
> "slave domains" that map it. Again, some kind of tag is needed to mark
> which domain is the master.
> 
> The user should also be able to specify the attributes (say, WO/RO/X) of
> the pages to be shared. For the master domain, these attributes describe
> the maximum permission allowed for the shared pages; for the slave
> domains, they describe the permissions with which the area will be
> mapped. This information should also be specified in the xl config entry.
> 
> To handle all this, I would suggest using an unsigned integer as the
> identifier, and a "master" tag in the master domain's xl config entry
> to announce that it will provide the backing memory pages. A separate
> parameter of the form "prot=RW" would describe the attributes of the
> shared memory area.
> For example:
> 
> In xl config file of vm1:
> 
>     static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>                           granularity = 4k, prot = RO, master",
>                          "id = ID2, begin = gmfn3, end = gmfn4,
>                           granularity = 4k, prot = RW, master"]
> 
> In xl config file of vm2:
> 
>     static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
>                           granularity = 4k, prot = RO"]
> 
> In xl config file of vm3:
> 
>     static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
>                           granularity = 4k, prot = RW"]
> 
> The gmfn's above are all hex values of the form "0x20000".
> 
> In the example above, a memory area ID1 will be shared between vm1 and vm2.
> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
> The parameter "prot=RO" means that this memory area is offered with
> read-only permission. vm1 can access this area using gmfn1~gmfn2, and vm2
> using gmfn5~gmfn6.
> Likewise, a memory area ID2 will be shared between vm1 and vm3 with read
> and write permissions. vm1 is the master and vm3 the slave. vm1 can access
> the area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.
> 
> The "granularity" is optional in the slaves' config entries, but if it
> is present, it has to be the same as the master's. The size of the gmfn
> range must also match. Overlapping backing memory areas are well defined.
> 
> Note that the "master" tag in vm1 for both ID1 and ID2 indicates that vm1
> must be created prior to both vm2 and vm3, for they both rely on the pages
> backed by vm1. If one tries to create vm2 or vm3 prior to vm1, she will get
> an error. And in vm1's config file, the "prot=RO" parameter of ID1 indicates
> that if one tries to map this area with, say, "RW" permission, she will
> get an error, too.
> 
> ======================================
> 2.2 Store the mem-sharing information in xenstore
> ======================================
> Since we don't have any persistent storage for xl to keep the information
> about the shared memory areas, we have to find some way to preserve it
> between xl launches, and xenstore is a good place to do this. The
> information for one shared area should include its ID, the master domid,
> and the gmfn ranges and memory attributes in the master and slave domains.
> The current plan is to place this information under /local/shared_mem/ID.
> Taking the above config files as an example:
> 
> If we instantiate vm1, vm2 and vm3 one after another, the output of
> "xenstore ls -f" should evolve as follows.
> 
> After VM1 was instantiated, it will contain something like this:
> 
>     /local/shared_mem/ID1/master = domid_of_vm1
>     /local/shared_mem/ID1/gmfn_begin = gmfn1
>     /local/shared_mem/ID1/gmfn_end = gmfn2
>     /local/shared_mem/ID1/granularity = "4k"
>     /local/shared_mem/ID1/permissions = "RO"
>     /local/shared_mem/ID1/slaves = ""
> 
>     /local/shared_mem/ID2/master = domid_of_vm1
>     /local/shared_mem/ID2/gmfn_begin = gmfn3
>     /local/shared_mem/ID2/gmfn_end = gmfn4
>     /local/shared_mem/ID2/granularity = "4k"
>     /local/shared_mem/ID2/permissions = "RW"
>     /local/shared_mem/ID2/slaves = ""
> 
> After VM2 was instantiated, the following new lines will appear:
> 
>     /local/shared_mem/ID1/slaves/domid_of_vm2/gmfn_begin = gmfn5
>     /local/shared_mem/ID1/slaves/domid_of_vm2/gmfn_end = gmfn6
>     /local/shared_mem/ID1/slaves/domid_of_vm2/permissions = "RO"
> 
> After VM3 was instantiated, the following new lines will appear:
> 
>     /local/shared_mem/ID2/slaves/domid_of_vm3/gmfn_begin = gmfn7
>     /local/shared_mem/ID2/slaves/domid_of_vm3/gmfn_end = gmfn8
>     /local/shared_mem/ID2/slaves/domid_of_vm3/permissions = "RW"
> 
> 
> When we encounter an id IDx during "xl create" (see the sketch after
> this list):
> 
>   + If it's not found under /local/shared_mem:
>     + If the corresponding config entry has a "master" tag, create the
>       corresponding entries for IDx in xenstore.
>     + If there isn't a "master" tag, raise an error.
> 
>   + If it's found under /local/shared_mem:
>     + If the corresponding config entry has a "master" tag, raise an
>       error.
>     + If there isn't a "master" tag, map the pages into the newly
>       created domain, and add the current domain and the necessary
>       information under /local/shared_mem/IDx/slaves.
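> 
> Below is a minimal C sketch of this decision logic, using the
> libxenstore API (xs_read returns NULL for a non-existent node). Only
> the path layout comes from this proposal; setup_static_shared_mem and
> its arguments are hypothetical names used purely for illustration:
> 
>     /* Hypothetical sketch, not actual libxl code. */
>     #include <stdbool.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <xenstore.h>
> 
>     /* Returns 0 on success, -1 on error. */
>     static int setup_static_shared_mem(struct xs_handle *xs,
>                                        const char *id, bool is_master)
>     {
>         char path[64];
>         unsigned int len;
>         void *val;
> 
>         snprintf(path, sizeof(path), "/local/shared_mem/%s", id);
>         val = xs_read(xs, XBT_NULL, path, &len);
> 
>         if (val == NULL) {
>             /* IDx not registered yet: only a master may create it. */
>             if (!is_master) {
>                 fprintf(stderr, "no master domain for %s yet\n", id);
>                 return -1;
>             }
>             /* ... write master, gmfn_begin/gmfn_end, granularity and
>              * permissions under /local/shared_mem/IDx ... */
>             return 0;
>         }
>         free(val);
> 
>         /* IDx already registered: a second master is an error. */
>         if (is_master) {
>             fprintf(stderr, "duplicate master for %s\n", id);
>             return -1;
>         }
> 
>         /* ... map the master's pages into the new domain, then record
>          * it under /local/shared_mem/IDx/slaves/<domid>/ ... */
>         return 0;
>     }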

Aside from using "gfn" instead of "gmfn" everywhere, I think it looks
pretty good.

I would leave out permissions and cacheability attributes from this
version of the work. I would just add a note saying that memory will be
mapped as RW regular cacheable RAM. Other permissions and cacheability
will be possible, but they are not implemented yet.

I think you should also add a few lines on how the teardown is supposed
to work at domain destruction, mentioning that the memory will be freed
only after all slaves and the master are destroyed. I would also clarify
who removes the /local/shared_mem xenstore entries, and when.
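
Purely as an illustration of that teardown rule, the toolstack side
could keep a per-area reference count and free the backing pages only
when the last participant goes away (all names below are made up):

    /* Illustrative only: one possible shape for the teardown rule. */
    struct shared_area {
        unsigned int refcnt;   /* the master plus the live slaves */
        /* ... backing pages, /local/shared_mem/<ID> path, etc. ... */
    };

    /* Called once for each participating domain destroyed. */
    static void shared_area_put(struct shared_area *area)
    {
        if (--area->refcnt == 0) {
            /* Last reference gone: free the backing pages and remove
             * the /local/shared_mem/<ID> xenstore subtree. */
        }
    }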


> ======================================
> 2.3 mapping the memory areas
> ======================================
> Handle the newly added config option in tools/{xl, libxl} and utilize
> tools/libxc to do the actual memory mapping. Specifically, we will use
> a wrapper around XENMEM_add_to_physmap_batch with XENMAPSPACE_gmfn_foreign
> to do the actual mapping. But since there isn't such a wrapper in libxc,
> we'll have to add a new one, xc_domain_add_to_physmap_batch, in
> libxc/xc_domain.c. A possible shape is sketched below.
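> 
> A rough sketch of what such a wrapper could look like, following the
> usual libxc hypercall bounce-buffer pattern (the exact signature and
> struct field names are assumptions, to be settled during review):
> 
>     /* Hypothetical sketch for libxc/xc_domain.c; relies on
>      * libxc-internal helpers from xc_private.h. */
>     int xc_domain_add_to_physmap_batch(xc_interface *xch,
>                                        uint32_t domid,
>                                        uint32_t foreign_domid,
>                                        unsigned int space,
>                                        unsigned int size,
>                                        xen_ulong_t *idxs,
>                                        xen_pfn_t *gpfns,
>                                        int *errs)
>     {
>         int rc;
>         DECLARE_HYPERCALL_BOUNCE(idxs, size * sizeof(*idxs),
>                                  XC_HYPERCALL_BUFFER_BOUNCE_IN);
>         DECLARE_HYPERCALL_BOUNCE(gpfns, size * sizeof(*gpfns),
>                                  XC_HYPERCALL_BUFFER_BOUNCE_IN);
>         DECLARE_HYPERCALL_BOUNCE(errs, size * sizeof(*errs),
>                                  XC_HYPERCALL_BUFFER_BOUNCE_OUT);
> 
>         struct xen_add_to_physmap_batch xatp = {
>             .domid = domid,
>             .space = space,      /* XENMAPSPACE_gmfn_foreign here */
>             .size = size,
>             .u.foreign_domid = foreign_domid,
>         };
> 
>         if ( xc_hypercall_bounce_pre(xch, idxs)  ||
>              xc_hypercall_bounce_pre(xch, gpfns) ||
>              xc_hypercall_bounce_pre(xch, errs)  )
>         {
>             rc = -1;
>             goto out;
>         }
> 
>         set_xen_guest_handle(xatp.idxs, idxs);
>         set_xen_guest_handle(xatp.gpfns, gpfns);
>         set_xen_guest_handle(xatp.errs, errs);
> 
>         rc = do_memory_op(xch, XENMEM_add_to_physmap_batch,
>                           &xatp, sizeof(xatp));
> 
>     out:
>         xc_hypercall_bounce_post(xch, idxs);
>         xc_hypercall_bounce_post(xch, gpfns);
>         xc_hypercall_bounce_post(xch, errs);
> 
>         return rc;
>     }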
> 
> ======================================
> 2.4 error handling
> ======================================
> Add code to handle various errors: invalid addresses, invalid permissions,
> wrong order of VM creation, mismatched granularity or length of memory
> areas, etc.
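> 
> As an illustration, a hypothetical helper (names made up here) could
> validate the granularity and length constraints described in section 2.1:
> 
>     #include <stdint.h>
> 
>     /* Returns 0 if the slave's range is compatible with the master's,
>      * -1 otherwise. A granularity of 0 means "not specified". */
>     static int check_shared_mem_ranges(uint64_t m_begin, uint64_t m_end,
>                                        uint64_t s_begin, uint64_t s_end,
>                                        uint64_t m_gran, uint64_t s_gran)
>     {
>         if (m_begin > m_end || s_begin > s_end)
>             return -1;                  /* invalid address range */
>         if (s_gran != 0 && s_gran != m_gran)
>             return -1;                  /* granularity mismatch */
>         if (m_end - m_begin != s_end - s_begin)
>             return -1;                  /* length mismatch */
>         return 0;
>     }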
> 
> ====================================================
> 3. Expected Outcomes/Goals:
> ====================================================
> A new VM config option in xl will be introduced, allowing users to set up
> several shared memory areas for inter-VM communication.
> This should work on both x86 and ARM.
> 
> ====================================================
> 4. Future Directions:
> ====================================================
> There could also be other non-permission memory attributes, like
> cacheability and shareability.
> 
> The user could also indicate where in the host physical memory the
> backing memory should be taken from.
> 
> We could also set up a notification channel between domains that
> communicate through shared memory regions, allowing one VM to signal its
> peers when data is available in the shared memory, or when the data in it
> has been consumed. The channel could be built upon PPI or SGI.
> 
> 
> [See also:
> https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Share_a_page_in_memory_from_the_VM_config_file]
> 
> 
> Cheers,
> 
> Zhongze Liu
> 