
Re: [Xen-devel] question about memory allocation for driver domain



Hi Ian

On Mon, Feb 9, 2015 at 12:53 PM, Ian Campbell <ian.campbell@xxxxxxxxxx> wrote:
> On Mon, 2015-02-09 at 16:31 +0800, Julien Grall wrote:
>> It seems logical to me that destroying/creating domd in a row works fine.
>> But this use case is too simple :).
>>
>> Let's imagine we decide to start classical domains (i.e. no 1:1 mapping)
>> before creating domd (the 1:1 domain). As the free memory may be
>> sparse, allocating one large contiguous RAM region may not work and
>> therefore the domain allocation would fail.
>>
>> On a similar idea, the host RAM may be split across multiple non-contiguous
>> banks. In this case, the RAM size of the 1:1 domain cannot be bigger
>> than the size of a single bank. You will never know which bank is used,
>> as, IIRC, the allocator behavior changes between debug and non-debug builds.
>> We had the same issue on DOM0 before support for multiple banks was
>> added. It sounds like you may want multiple bank support for an
>> upstream use case.
>
> It seems to me that any use of 1:1 memory for !dom0 needs to be from a
> preallocated region which is allocated for this purpose at boot and then
> reserved for this specific allocation.
>
> e.g. let's imagine a hypervisor option mem_11_reserve=256M,256M,128M
> which would, at boot time, allocate 2x 256M contiguous regions and
> 1x 128M one. When building a guest, some mechanism (a new hypercall, some
> other trickery, etc.) indicates that the guest being built is
> supposed to use one of these regions instead of the usual domheap
> allocator.
>
> This would allow for a boot time configurable number of 1:1 regions. I
> think this would work for the embedded use case since the domains which
> have these special properties are well defined in size and number and so
> can be allocated up front.
Sounds reasonable.
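
Just to check my understanding of the proposal, below is a rough, standalone
sketch of how a mem_11_reserve=256M,256M,128M string could be parsed into a
list of region sizes to be carved out at boot. The option name comes from your
example above; everything else (names, limits) is made up for illustration and
is not existing Xen code:

/* Illustrative, standalone sketch: parse a hypothetical
 * "mem_11_reserve=256M,256M,128M" value into a list of region sizes.
 * Not Xen code; it only models the boot-time reservation idea. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_11_REGIONS 8

static unsigned long long regions[MAX_11_REGIONS]; /* reserved sizes in bytes */
static unsigned int nr_regions;

static int parse_mem_11_reserve(const char *s)
{
    char *copy = strdup(s), *tok, *end;

    for (tok = strtok(copy, ","); tok && nr_regions < MAX_11_REGIONS;
         tok = strtok(NULL, ","))
    {
        unsigned long long size = strtoull(tok, &end, 0);

        /* Accept the usual K/M/G suffixes. */
        switch (*end) {
        case 'G': case 'g': size <<= 10; /* fall through */
        case 'M': case 'm': size <<= 10; /* fall through */
        case 'K': case 'k': size <<= 10; break;
        case '\0': break;
        default: free(copy); return -1;
        }

        /* In a real implementation each size would be handed to the boot
         * allocator to reserve one contiguous region up front. */
        regions[nr_regions++] = size;
    }

    free(copy);
    return 0;
}

int main(void)
{
    if (parse_mem_11_reserve("256M,256M,128M") == 0) {
        for (unsigned int i = 0; i < nr_regions; i++)
            printf("1:1 region %u: %llu bytes\n", i, regions[i]);
    }
    return 0;
}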

>
>> The next problem is ballooning. When the guest balloons out memory, the
>> pages will be freed by Xen and can be re-used by another domain.
>
> I think we need to do as we do for 1:1 dom0 here and not hand back the
> memory on decrease reservation, but instead punch a hole in the p2m while
> keeping the mfn in reserve.
>
> IOW, ballooning is not supported for such domains (we only go as far as
> punching the hole to allow for the other use case of ballooning, which is
> to make a p2m hole for the Xen backend driver to use for grant maps).
>
>> The last problem, but not the least, is that, depending on which backend you
>> are running in the 1:1 domain (such as blkback), grants won't be mapped 1:1
>> to the guest, so you will have to use swiotlb in order to use the right
>> DMA address. For instance, without swiotlb, the guest won't be able to use a
>> disk partition via blkfront. This is because the backend gives the grant
>> address directly to the block driver. To solve this, we have to use
>> swiotlb and set specific DMA callbacks. For now, they are only used for
>> DOM0.
>
> Not much we can do here except extend the dom0 code to
> conditionally enable itself for other domains.
Agreed. There are no problems with swiotlb and xen_dma_ops. We already
have working backends in domd: fb, vkbd, sound.
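
For reference, a tiny standalone model of why the bounce buffer (the role
swiotlb plays here) is needed when the backend hands out grant-mapped addresses
that the device cannot DMA to directly. Everything below is invented for
illustration and is not the real Linux/Xen code or API:

/* Toy model: a grant-mapped page has a machine address the device cannot
 * use, so the data is bounced through a buffer the device can address. */
#include <stdio.h>
#include <string.h>

#define DMA_LIMIT 0x1000UL     /* toy rule: only addresses below this are DMA-able */

static char bounce_buffer[64]; /* stand-in for the swiotlb pool */

/* Returns the buffer the device should actually DMA to/from. */
static void *dma_map_for_device(void *addr, size_t len, unsigned long machine_addr)
{
    if (machine_addr < DMA_LIMIT)
        return addr;           /* 1:1 mapped, device can use it directly */

    /* Grant-mapped page: its machine address is unusable for the device,
     * so copy into the bounce buffer and hand that out instead. */
    memcpy(bounce_buffer, addr, len);
    return bounce_buffer;
}

int main(void)
{
    char grant_page[16] = "guest data";

    /* Pretend this page was grant-mapped at a high machine address. */
    void *dev_addr = dma_map_for_device(grant_page, sizeof(grant_page), 0xdead000UL);

    printf("device sees: %s (%s)\n", (char *)dev_addr,
           dev_addr == grant_page ? "direct" : "bounced");
    return 0;
}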

>
> Ian.
>



-- 

Oleksandr Tyshchenko | Embedded Dev
GlobalLogic
www.globallogic.com
