[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-users] Does XEN on ARM support allocation for physical continuous memory on DOM0/U?

On Thu, 2015-10-22 at 07:38 +0000, Tom Ting wrote:
> > -----Original Message-----
> > From: Julien Grall [mailto:julien.grall@xxxxxxxxxx] 
> > Sent: Tuesday, October 20, 2015 9:21 PM
> > To: Tom Ting; Ian Campbell; xen-users@xxxxxxxxxxxxx
> > Subject: Re: [Xen-users] Does XEN on ARM support allocation for
> > physical continuous memory on DOM0/U?
> > 
> > (Deleted)
> > 
> > Having the memory physically contiguous is not enough. You also have to
> > map the physical region at the same base address in the guest to be
> > able to program the DMA correctly.
> > 
> > Also, beware that device passthrough is not safe without an IOMMU.
> > Your Android guest will be able to issue DMA requests to any part of
> > the RAM and could interfere with DomU_Function.
> > 
> > If you care about security, I would advise you to look at implementing
> > new PV drivers.
> > 
> > > As you mentioned, if we want to solve the memory-contiguous problem
> > > on DomU, we have to hack through the domain-memory-allocation path,
> > > right?
> > > Could you tell us where the DomU memory construction flow is located
> > > so we could make some modifications?
> > 
> > FYI, a thread was started a few months ago with some ideas on how to
> > support 1:1 mapping for guests (see [1]).
> > 
> > In outline, you first have to modify the memory allocation in the
> > toolstack (see populate_guest_memory in tools/libxc/xc_dom_arm.c) to
> > handle the contiguous size you want to support.
> > 
> > You then need to find a way to get the physical address of the block
> > from the hypervisor and write it in the guest DT. This also means the
> > guest memory layout, which is currently hardcoded (see
> > xen/include/public/arch-arm.h), needs to be modified for your purpose.
> > 
> > At that point, you may hit some problems with the Xen allocator,
> > because nothing guarantees that you will be able to find enough space
> > to allocate the memory contiguously.
> > 
> > > Really thanks for your help.
> > 
> > Regards,
> > 
> > [1] http://lists.xenproject.org/archives/html/xen-devel/2015-02/msg00570.html
> > 
> > --
> > Julien Grall
> Thanks Julien
> We decided to create back/front-end drivers to solve the contiguous-memory
> problem on Dom0.
> Currently checking through the grant-table / front-backend driver /
> xenstore stuff.
> But I am wondering whether memory operations will become a bottleneck when
> using the grant table to transfer data, since double the memory operations
> will be needed (please correct me if I am wrong).
>       EX : 1. DomU copies data to the grant page; 2. Dom0 copies data from
> the grant page to the target memory.
> Or, I could probably reserve a contiguous memory region (probably tens of
> MB), then use the grant table to share these pages with DomU.
> Then DomU could take these granted pages as contiguous memory? Or is there
> any better advice, or could the overhead simply be ignored?

Since you haven't specified the kind of devices in use, it is rather hard
to say anything specific. Some general points about PV protocols:

It is normal to design a PV interface such that it doesn't require
contiguous memory.

It is normally the frontend (domU) which allocates the memory (from its own
pool) and then grants access to it to the backend (dom0).

You can usually avoid at least one potential copy operation by using the
grant map rather than grant copy mechanism. If the backend (dom0) maps the
(necessarily non-contiguous) frontend (domU) memory, then you end up with a
single copy between that mapping and dom0-owned 1:1 memory suitable for DMA
(either in or out). If the physical driver correctly uses the existing DMA
interfaces in the kernel, then for data going from dom0->device this copy
should happen automatically and transparently via a bounce buffer in the
swiotlb, only when needed.


Xen-users mailing list


