
Re: [Xen-devel] question about memory allocation for driver domain





On 07/02/2015 01:15, Oleksandr Tyshchenko wrote:
Hi Julien

Hi Oleksandr,

On Thu, Feb 5, 2015 at 6:36 PM, Oleksandr Tyshchenko
Let me describe solution #3 in detail before answering your question.
Maybe I missed something in the first mail; also, Ian's answer clarified
some points for me.
We don't have a complete solution for now, only a temporary one that
relies on assumptions which may or may not be acceptable, but it
seems that the approach in general could work.
If that is true, our target is to rewrite/rework this stuff to make
it cleaner and more correct from the Xen point of view for platforms which
don't have an SMMU,
even if this stuff never reaches upstream.

To run the driver domain (domd) on the OMAP5 platform with a 1:1 mapping
we made the following preparations
(here I describe memory allocation only; IRQ handling and
other things are out of scope for the current thread):
1. Since domd can use 128/256/512 MB of RAM, we modified the existing
populate_guest_memory() in xc_dom_arm.c to allow allocating
128/256/512 MB memory chunks (a rough sketch of the idea is below).
2. Since the default rambase (0x40000000) is not suitable for us for
several reasons:
- the OMAP5 platform has memory-mapped registers starting from 0x48000000;
- we have two guest domains (domd and domU), so they need different
rambases;
we added the ability for the toolstack to pass the rambase via the domain
config file.
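
Roughly, the toolstack side of step 1 looks like this (a sketch only, not
the actual patch; populate_guest_memory_11() is a made-up name here, the
only real call is xc_domain_populate_physmap_exact() from libxc):

    #include <xenctrl.h>

    /* Sketch: populate the guest with one large extent.
     * With 4K pages, 128/256/512 MB correspond to extent orders 15/16/17. */
    static int populate_guest_memory_11(xc_interface *xch, uint32_t domid,
                                        xen_pfn_t base_gpfn, uint64_t ram_mb)
    {
        unsigned int order;
        xen_pfn_t extent = base_gpfn;

        switch ( ram_mb )
        {
        case 128: order = 15; break;
        case 256: order = 16; break;
        case 512: order = 17; break;
        default:  return -1;       /* only these sizes are handled */
        }

        /* Ask Xen for a single contiguous extent of 2^order pages at base_gpfn. */
        return xc_domain_populate_physmap_exact(xch, domid, 1 /* nr_extents */,
                                                order, 0 /* mem_flags */,
                                                &extent);
    }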

While the overlap is there on Xen 4.5 today, we may also decide later to re-arrange the memory layout and put the GIC MMIOs at 0x48000000.

A more generic solution would be to re-use the memory layout of the host: with a 1:1 mapping for both MMIO and RAM, we would avoid overlapping with any possible "virtual" region.

I remember talking about it with Ian a few months ago, but we didn't have a practical use case at the time. FWIW, x86 has a similar solution via the e820_host param.

3. Since domd needs one contiguous chunk of memory, we created a
new function allocate_domd_memory_11() in common/memory.c for the following
purposes:
- to allocate one contiguous memory chunk of the specified order (as it
is done for dom0);
- to add this allocated chunk to the guest (domd) via
guest_physmap_add_page(), taking into account that mfn == gpfn.
4. Since we need to allocate the memory before any operation on it, we
created a hook for the XENMEM_populate_physmap command. Here we relied on
the assumption
that the domain_id for domd is always 1 (which no longer holds once domd
has been destroyed and created again).
During the first XENMEM_populate_physmap command we set is_privileged=true
and call allocate_domd_memory_11();
during the following commands we call populate_physmap() as usual. The
is_domain_direct_mapped condition covers our case in populate_physmap().
I know that it is a very, very hackish solution (a rough sketch of the
allocation helper is below). But, let's continue...
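
The hypervisor side of allocate_domd_memory_11() is essentially this (a
sketch only, under the assumption that alloc_domheap_pages() and
guest_physmap_add_page() are used the same way as for dom0; error handling
and locking are omitted):

    #include <xen/mm.h>
    #include <xen/sched.h>
    #include <xen/errno.h>

    /* Sketch: allocate one contiguous chunk of 2^order pages and map it
     * 1:1 (gpfn == mfn) into the domain, similar to what dom0 gets. */
    static int allocate_domd_memory_11(struct domain *d, unsigned int order)
    {
        struct page_info *pg;
        unsigned long mfn;

        pg = alloc_domheap_pages(d, order, 0);
        if ( pg == NULL )
            return -ENOMEM;

        mfn = page_to_mfn(pg);

        /* Map at gpfn == mfn so the domain sees a 1:1 layout for its RAM. */
        return guest_physmap_add_page(d, mfn, mfn, order);
    }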

How does it work at the moment?
1. Create domd with the default rambase_pfn (0x80000000).
2. See which mfn we got in allocate_domd_memory_11().
3. Set rambase_pfn=mfn in the config file.
If the system configuration (number of domains, domain memory, etc.) is not
changed, we will always get the same mfn. If we decide to change
something, for example the domd memory, we need to repeat steps 2 and 3.
Yes, it doesn't look good.

How should it work?
The approach is to tailor the domd address map to the contiguous
region the allocator gives us. So the guest rambase (gpfn) must be based
on the mfn of the page we have successfully allocated, without any manual
actions.

I think it should be done in the following steps (see the sketch below):
1. Add a separate command, XENMEM_alloc_memory_11 or something similar,
issued before calling xc_dom_rambase_init() in libxl_dom.c, and only when
the domd_memory_11 property is present in the config file. This would
remove the terrible hook and anything related to d->domain_id=1 in
common/memory.c.
2. Pass the result returned by XENMEM_alloc_memory_11 to xc_dom_rambase_init().
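
In toolstack terms the flow could look like this (a sketch only:
XENMEM_alloc_memory_11, its xc_domain_alloc_memory_11() wrapper and the
build_domd_11() helper are hypothetical; only xc_dom_rambase_init() exists
today):

    #include <xenctrl.h>
    #include <xc_dom.h>

    /* Sketch: ask Xen for the contiguous 1:1 chunk first, then derive the
     * guest rambase from the mfn the allocator returned. */
    static int build_domd_11(xc_interface *xch, struct xc_dom_image *dom,
                             uint32_t domid, uint64_t ram_mb)
    {
        xen_pfn_t base_mfn;
        int rc;

        /* Hypothetical wrapper around the proposed XENMEM_alloc_memory_11. */
        rc = xc_domain_alloc_memory_11(xch, domid,
                                       ram_mb << (20 - XC_PAGE_SHIFT),
                                       &base_mfn);
        if ( rc )
            return rc;

        /* Tailor the guest address map to what the allocator gave us. */
        return xc_dom_rambase_init(dom, (uint64_t)base_mfn << XC_PAGE_SHIFT);
    }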

What are the advantages compared with solution #1 and solution #2?
1. There is no need to add a standalone allocator or to modify the existing one.

Let's return to your question about creating/destroying domd multiple times.
I have tried this with the solution we have at the moment. I added
some modifications to allow me to destroy/create domd multiple times,
and I saw that the allocator always returns the same page. This
means that all the memory allocated for domd has been returned to the
heap. Am I right?
Or did you mean that this may happen with the completed solution?

It seems logical to me that destroying/creating domd in a row works fine. But this use case is too simple :).

Let's imagine we decide to start classical domains (i.e. no 1:1 mapping) before creating domd (the 1:1 domain). As the free memory may be sparse, allocating one large RAM region may not work and therefore the domain allocation would fail.

On a similar note, the host RAM may be split into multiple non-contiguous banks. In this case, the RAM size of the 1:1 domain cannot be bigger than the size of a bank. You will never know which bank is used as, IIRC, the allocator behaviour changes between debug and non-debug builds. We had the same issue on DOM0 before support for multiple banks was added. It sounds like you may want multiple-bank support for an upstream use case.

The next problem is ballooning. When the guest balloons out memory, the pages are freed by Xen and can be re-used by another domain.

When the guest balloons in, Xen will allocate a page (effectively at random) and therefore the mapping will no longer be IPA (guest physical address) == PA (physical address). Any DMA request using such an address will read/write data from the wrong memory.
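
For what it's worth, the balloon-in path boils down to something like this
(a simplified sketch, not the actual populate_physmap() code): nothing ties
the newly allocated mfn to the gpfn being repopulated.

    /* Simplified sketch of balloon-in: Xen picks any free page, so the
     * repopulated gpfn no longer equals the mfn behind it. */
    static int repopulate_one_page(struct domain *d, unsigned long gpfn)
    {
        struct page_info *pg = alloc_domheap_pages(d, 0, 0); /* arbitrary mfn */

        if ( pg == NULL )
            return -ENOMEM;

        /* gpfn stays the same, but page_to_mfn(pg) is whatever happened to
         * be free, so IPA == PA does not hold anymore and 1:1 DMA breaks. */
        return guest_physmap_add_page(d, gpfn, page_to_mfn(pg), 0);
    }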

The last but not least problem is that, depending on which backend you are running in the 1:1 domain (such as blkback), grants won't be mapped 1:1 to the guest, so you will have to use swiotlb in order to use the right DMA address. For instance, without swiotlb, a guest won't be able to use a disk partition via blkfront, because the backend gives the grant address directly to the block driver. To solve this, we have to use swiotlb and set specific DMA callbacks. For now, these are only used for DOM0.

I think I've covered all the things you have to take care of with a 1:1 mapping. Let me know if you need more information on any of them.

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

