
Re: [Xen-devel] [RFC v4] Proposal to allow setting up shared memory areas between VMs from xl config file



On Mon, 31 Jul 2017, Zhongze Liu wrote:
> I'm extremely sorry that I mistakenly copied and pasted an intermediate
> version of the proposal here. As you might have already noticed, some
> of its content obviously conflicts with itself. Please see the new one
> below, in which some typos and indentation issues have also been fixed.
> Sorry again for my mistake.
> 
> The revised proposal:
> 
> ====================================================
> 1. Motivation and Description
> ====================================================
> Virtual machines use grant table hypercalls to set up shared pages for
> inter-VM communication. These hypercalls are used by all PV
> protocols today. However, very simple guests, such as baremetal
> applications, might not have the infrastructure to handle the grant table.
> This project is about setting up several shared memory areas for inter-VM
> communication directly from the VM config file, so that the guest kernel
> doesn't have to have grant table support (in the embedded space, this is
> not unusual) to be able to communicate with other guests.
> 
> ====================================================
> 2. Implementation Plan:
> ====================================================
> 
> ======================================
> 2.1 Introduce a new VM config option in xl:
> ======================================
> 
> 2.1.1 Design Goals
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> The shared areas should be shareable among several (>=2) VMs, so every shared
> physical memory area is assigned to a set of VMs. Therefore, a “token” or
> “identifier” is needed to uniquely identify a backing memory area.
> A string no longer than 128 bytes serves this purpose.
> 
> The backing area will be taken from one domain, which we will regard
> as the "master domain", and this domain should be created prior to any
> of the "slave domains". Again, we have to use some kind of tag to tell
> which domain is the "master domain".
> 
> The user should also be able to specify the permissions and cacheability
> (and, for ARM guests, shareability) of the pages to be shared.
> 
> 2.1.2 Syntax and Behavior
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> The following example illustrates the syntax of the proposed config entry
> (suppose that we're on x86):
> 
> In xl config file of vm1:
>   static_shm = [ 'id=ID1, begin=0x100000, end=0x200000, role=master,    \
>                   cache_policy=x86_normal, prot=rw',                    \
>                                                                         \
>                  'id=ID2, begin=0x300000, end=0x400000, role=master' ]
> 
> In xl config file of vm2:
>   static_shm = [ 'id=ID1, offset=0, begin=0x500000, end=0x600000,       \
>                   role=slave, prot=rw' ]
> 
> In xl config file of vm3:
>   static_shm = [ 'id=ID2, offset=0x10000, begin=0x690000,               \
>                   end=0x780000, role=slave' ]
> 
> where:
>   @id              The identifier of the backing memory area.
>                    Can be any string that matches the regexp "[_a-zA-Z0-9]+"
>                    and is no longer than 128 characters.
> 
>   @offset          Can only appear when @role = slave. The sharing will
>                    start from the beginning of the backing memory area plus
>                    this offset. If not set, it defaults to zero.
>                    Can be a decimal or hexadecimal of the form "0x20000",
>                    and must be a multiple of the hypervisor page
>                    granularity (currently 4K on both ARM and x86).
> 
>   @begin/end       The boundaries of the shared memory area. The format
>                    requirements are the same as for @offset.
> 
>   @role            Can only be 'master' or 'slave'; it defaults to 'slave'.
> 
>   @prot            When @role = master, this means the largest set of
>                    stage-2 permission flags that can be granted to the
>                    slave domains.
>                    When @role = slave, this means the stage-2 permission
>                    flags of the shared memory area.
>                    Currently only 'rw' is supported. If not set, it
>                    defaults to 'rw'.
> 
>   @cache_policy    Can only appear when @role = master.
>                    The stage-2 cacheability/shareability attributes of the
>                    shared memory area. Currently, only two policies are
>                    supported:
>                      * ARM_normal: Only applicable to ARM guests. This
>                                    would mean Inner and Outer Write-Back
>                                    Cacheable, and Inner Shareable.
>                      * x86_normal: Only applicable to x86 HVM guests. This
>                                    would mean Write-Back Cacheable.
>                    If not set, it defaults to the *_normal policy for the
>                    corresponding platform.
> 
> Note:
>   The sizes of the areas specified by @begin and @end in the slave
>   domain's config file should be smaller than the corresponding sizes
>   specified in its master's domain.

"Smaller than" should be "smaller than or equal to" here; the ID1
example relies on equal sizes. You might also want to repeat that
@begin and @end must be a multiple of the hypervisor page granularity
(currently 4K on both ARM and x86).

>   And @offset should always be within the backing memory region.
>   Overlapping backing memory areas are allowed, but the slaves

What do you mean by "overlapping backing memory areas"?


>   can't map two different backing memory regions into an overlapping memory
>   space. And each shared memory region ID can appear at most once in one
>   domain's xl config file.
>
>   The "master" role in vm1 for both ID1 and ID2 indicates that vm1 should be
>   created prior to both vm2 and vm3, for they both rely on the pages backed by
>   vm1. If one tries to create vm2 or vm3 prior to vm1, she will get an error.
> 
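The alignment and size rules in this note are easy to get wrong, so it
may be worth spelling them out. Here is a minimal sketch of the checks
xl could perform on a slave entry, assuming 4K page granularity (the
struct and function names here are made up):

    /* Sanity checks for a slave static_shm entry against its master's
     * backing area (hypothetical names, 4K pages assumed). */
    #include <stdbool.h>
    #include <stdint.h>

    #define SHM_PAGE_SIZE 4096ULL

    struct static_shm_entry {
        uint64_t begin, end, offset;   /* from the slave's config file */
    };

    static bool shm_slave_entry_ok(const struct static_shm_entry *s,
                                   uint64_t master_begin,
                                   uint64_t master_end)
    {
        uint64_t size = s->end - s->begin;
        uint64_t master_size = master_end - master_begin;

        /* @begin, @end and @offset must be page-granularity aligned. */
        if (s->begin % SHM_PAGE_SIZE || s->end % SHM_PAGE_SIZE ||
            s->offset % SHM_PAGE_SIZE)
            return false;
        /* The range must be non-empty and not inverted. */
        if (s->end <= s->begin)
            return false;
        /* @offset must fall within the backing memory region. */
        if (s->offset >= master_size)
            return false;
        /* The slave's area must fit in what remains after @offset
         * (smaller than or equal to, as noted above). */
        if (size > master_size - s->offset)
            return false;
        return true;
    }
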
> In the example above, a memory area ID1 will be shared between vm1 and vm2.
> This area will be taken from vm1 and added to vm2's stage-2 page table.
> The parameter "prot=rw" means that this memory area is offered with read-write
> permission. vm1 can access this area using 0x100000~0x200000, and vm2 using
> 0x500000~0x600000. The stage-2 cache policy of this backing memory area is
> x86_normal.
> 
> Likewise, a memory area ID2 will be shared between vm1 and vm3 with read-write
> permissions. vm1 is the master and vm3 the slave. Note the @offset = 0x10000
> in vm3's config. The actual sharing relationship would be:
>    (vm1 : 0x310000~0x400000) <=====> (vm3 : 0x690000~0x780000)
> The stage-2 cache policy of this backing memory area is x86_normal.
> 
> ======================================
> 2.2 Store the mem-sharing information in xenstore
> ======================================
> Since we don't have any persistent storage for xl to store the information
> of the shared memory areas, we have to find some way to keep it between xl
> launches, and xenstore is a good place to do this. The information for one
> shared area should include the ID, the master's domid, the address range,
> the memory attributes, information about the slaves, etc.
> The current plan is to place the information under /local/shared_mem/ID.
> Again, take the above config files as an example:
> 
> If we instantiate vm1, vm2 and vm3, one after another, "xenstore ls -f"
> will show the following.
> 
> After VM1 was instantiated, the output will be something like this:
> 
>     /local/shared_mem/ID1/master = domid_of_vm1
>     /local/shared_mem/ID1/begin = "0x100000"
>     /local/shared_mem/ID1/end = "0x200000"
>     /local/shared_mem/ID1/prot = "rw"
>     /local/shared_mem/ID1/cache_policy = "x86_normal"
>     /local/shared_mem/ID1/slaves = ""
> 
>     /local/shared_mem/ID2/master = domid_of_vm1
>     /local/shared_mem/ID2/begin = "0x300000"
>     /local/shared_mem/ID2/end = "0x400000"
>     /local/shared_mem/ID2/prot = "rw"
>     /local/shared_mem/ID2/cache_policy = "x86_normal"
>     /local/shared_mem/ID2/slaves = ""
> 
> After VM2 was instantiated, the following new lines will appear:
> 
>     /local/shared_mem/ID1/slaves/domid_of_vm2/begin = "0x500000"
>     /local/shared_mem/ID1/slaves/domid_of_vm2/end = "0x600000"
>     /local/shared_mem/ID1/slaves/domid_of_vm2/offset = "0x0"
>     /local/shared_mem/ID1/slaves/domid_of_vm2/permissions = "rw"
> 
> After VM3 was instantiated, the following new lines will appear:
> 
>     /local/shared_mem/ID2/slaves/domid_of_vm3/begin = "0x690000"
>     /local/shared_mem/ID2/slaves/domid_of_vm3/end = "0x800000"
>     /local/shared_mem/ID2/slaves/domid_of_vm3/offset = "0x10000"
>     /local/shared_mem/ID2/slaves/domid_of_vm3/permissions = "rw"
> 
> 
> When we encounter a static_shm entry with id = IDx during "xl create":
> 
>   + If there's NO corresponding entry in xenstore:
>     + If @role=master, create the corresponding entries for IDx in xenstore
>     + If @role=slave, raise an error.
> 
>   + If the corresponding entry exists in xenstore:
>     + If @role=master, raise an error.
>     + If @role=slave, map the pages to the newly created domain, and add the
>       necessary information under /local/shared_mem/IDx/slaves.
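
For what it's worth, the create-time logic above might look roughly
like this using the plain xenstore C API (an untested sketch: the
helper name is made up, error handling is trimmed, and a real
implementation would want to wrap the read-check-write sequence in a
xenstore transaction to avoid races between concurrent xl invocations):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <xenstore.h>

    /* Create or extend /local/shared_mem/<id> for a new domain.
     * Returns 0 on success, -1 on the error cases described above. */
    static int setup_static_shm(struct xs_handle *xs, const char *id,
                                bool is_master, uint32_t domid)
    {
        char path[256], val[64];
        unsigned int len;
        void *master;

        snprintf(path, sizeof(path), "/local/shared_mem/%s/master", id);
        master = xs_read(xs, XBT_NULL, path, &len);

        if (!master && !is_master) {
            fprintf(stderr, "%s: the master domain doesn't exist yet\n", id);
            return -1;
        }
        if (master && is_master) {
            fprintf(stderr, "%s: a master domain already exists\n", id);
            free(master);
            return -1;
        }

        if (is_master) {
            /* No entry yet: record this domain as the master. The
             * begin/end/prot/cache_policy/slaves entries would be
             * written in the same way. */
            snprintf(val, sizeof(val), "%u", domid);
            xs_write(xs, XBT_NULL, path, val, strlen(val));
        } else {
            /* Entry exists: record this domain as a slave; the actual
             * mapping is done as described in 2.3 below. */
            free(master);
            snprintf(path, sizeof(path),
                     "/local/shared_mem/%s/slaves/%u/offset", id, domid);
            xs_write(xs, XBT_NULL, path, "0x0", strlen("0x0"));
        }
        return 0;
    }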


> ======================================
> 2.3 Mapping the memory areas
> ======================================
> Handle the newly added config option in tools/{xl, libxl} and utilize
> tools/libxc to do the actual memory mapping. Specifically, we will use
> xc_domain_add_to_physmap_batch with XENMAPSPACE_gmfn_foreign.
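
Something along these lines, perhaps (an untested sketch; the helper
name is made up and error handling is trimmed). The master's pages,
identified by their gpfns, are mapped at the slave's chosen gpfns in a
single batched call:

    #include <stdlib.h>
    #include <xenctrl.h>

    /* Map nr_pages of master_domid's memory, starting at master_gpfn
     * (i.e. the master's @begin >> XC_PAGE_SHIFT), into slave_domid at
     * slave_gpfn. errs[] reports the per-page outcome. */
    static int map_shared_area(xc_interface *xch,
                               uint32_t slave_domid, uint32_t master_domid,
                               xen_pfn_t master_gpfn, xen_pfn_t slave_gpfn,
                               unsigned int nr_pages)
    {
        xen_ulong_t *idxs = calloc(nr_pages, sizeof(*idxs));
        xen_pfn_t *gpfns = calloc(nr_pages, sizeof(*gpfns));
        int *errs = calloc(nr_pages, sizeof(*errs));
        unsigned int i;
        int rc;

        for (i = 0; i < nr_pages; i++) {
            idxs[i] = master_gpfn + i;  /* page in the foreign (master) domain */
            gpfns[i] = slave_gpfn + i;  /* where it appears in the slave */
        }

        rc = xc_domain_add_to_physmap_batch(xch, slave_domid, master_domid,
                                            XENMAPSPACE_gmfn_foreign,
                                            nr_pages, idxs, gpfns, errs);

        free(idxs);
        free(gpfns);
        free(errs);
        return rc;
    }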
> 
> ======================================
> 2.4 Error handling
> ======================================
> Add code to handle various errors: invalid addresses, invalid permissions,
> wrong order of vm creation, wrong length of memory area, etc.
> 
> ====================================================
> 3. Expected Outcomes/Goals:
> ====================================================
> A new VM config option in xl will be introduced, allowing users to set up
> several shared memory areas for inter-VM communication.
> This should work on both x86 and ARM.
> 
> ====================================================
> 4. Future Directions:
> ====================================================
> Implement the missing @prot flags and @cache_policy options.
> 
> Set up a notification channel between domains that communicate through
> shared memory regions; this would allow one vm to signal her friends when
> data is available in the shared memory, or when the data in the shared
> memory has been consumed. The channel could be built upon PPI or SGI.
> 
> 
> [See also:
> https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Share_a_page_in_memory_from_the_VM_config_file]
> 
> Cheers,
> 
> Zhongze Liu
> 