
RE: Aligning Xen to physical memory maps on embedded systems


  • To: Levenglick Dov <Dov.Levenglick@xxxxxxxxxxxxxxxx>
  • From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
  • Date: Mon, 1 Mar 2021 17:42:04 -0800
  • Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, "Xen-users@xxxxxxxxxxxxxxxxxxxx" <Xen-users@xxxxxxxxxxxxxxxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, Penny Zheng <Penny.Zheng@xxxxxxx>, Luca Fancellu <Luca.Fancellu@xxxxxxx>
  • Delivery-date: Tue, 02 Mar 2021 15:29:33 +0000
  • List-id: Xen user discussion <xen-users.lists.xenproject.org>

On Mon, 1 Mar 2021, Levenglick Dov wrote:
> > (+ Penny, Wei and Luca)
> > 
> > > On 23 Feb 2021, at 01:52, Stefano Stabellini <sstabellini@xxxxxxxxxx> 
> > > wrote:
> > >
> > > On Mon, 22 Feb 2021, Levenglick Dov wrote:
> > >>>> The system has 2GB of RAM (0x00000000 - 0x80000000) of which Xen
> > >>>> and the DomU have an allocation of 1.25GB, per this memory map:
> > >>>> 1. DomU1: 0x60000000 - 0x80000000
> > >>>> 2. DomU2: 0x40000000 - 0x60000000
> > >>>> 3. Xen: 0x30000000 - 0x40000000
> > >>>
> > >>> How did you tell Xen which regions are assigned to which guests? Are
> > >>> your domains mapped 1:1 (i.e. guest physical address == host physical
> > >>> address)?
> > >>
> > >> I am working on a solution where, if the "xen,domain" memory property
> > >> has #size-cells cells, the content is backward compatible. But if it
> > >> contains (#address-cells + #size-cells) cells, the address cells should
> > >> be considered the physical start address.
> > >> During the mapping of the entire address space in setup_mm(), the
> > >> carved-out addresses would be added to the reserved memory address
> > >> space. When the DomU is to be created, this physical space would be
> > >> mapped to it. The virtual addresses are less of an issue and needn't be
> > >> mapped 1:1 (although they could be).
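
[For illustration, the extension described above might look roughly like
this in the device tree. This is a hypothetical sketch: the
(address, size) form of the memory property is the proposal under
discussion, not an existing binding - the upstream dom0less "memory"
property instead takes a size in KiB.]

```dts
/* Sketch of the PROPOSED "xen,domain" extension - not an existing binding.
 * With (#address-cells + #size-cells) cells, the leading cells name the
 * requested physical start address. */
chosen {
    #address-cells = <0x1>;
    #size-cells = <0x1>;

    domU1 {
        compatible = "xen,domain";
        /* requested start 0x60000000, size 0x20000000 (512MB) */
        memory = <0x60000000 0x20000000>;
    };
};
```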
> > >
> > > As of today neither upstream Xen nor the Xilinx Xen tree come with the
> > > feature of allowing the specification of an address range for dom0less
> > > guests.
> > >
> > > The only thing that Xilinx Xen allows, which is not upstream yet, is
> > > the ability to create dom0less guests 1:1 mapped using the "direct-map"
> > > property. But the memory allocation is still done by Xen (you can't
> > > select the addresses).
> > >
> > > Some time ago I worked on a hacky prototype to allow the specification
> > > of address ranges, see:
> > >
> > > http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git
> > > branch direct-map-2, from 7372466b21c3b6c96bb7a52754e432bac883a1e3
> > > onward.
> > >
> > > In particular, have a look at "xen/arm: introduce 1:1 mapping for
> > > domUs". The work is not complete: it might not work depending on the
> > > memory ranges you select for your domUs. In particular, you can't
> > > select top-of-RAM addresses for your domUs. However, it might help you
> > > get started.
> > >
> > >
> > >>>> I am able to support true dom0less by means of the patch/hack
> > >>>> demonstrated by Stefano Stabellini at
> > >>>> https://youtu.be/UfiP9eAV0WA?t=1746.
> > >>>>
> > >>>> I was able to forcefully put the Xen binary at the address range
> > >>>> immediately below 0x40000000 by modifying get_xen_paddr() -
> > >>>> in itself an ugly hack.
> > >>>>
> > >>>> My questions are:
> > >>>> 1. Since Xen performs runtime allocations from its heap, it is
> > >>>>    allocating downwards from 0x80000000 - thereby "stealing" memory
> > >>>>    from DomU1.
> > >>>
> > >>> In theory, any memory reserved for domains should have been carved
> > >>> out from the heap allocator. This would be sufficient to prevent Xen
> > >>> from allocating memory from the ranges you described above.
> > >>>
> > >>> Therefore, to me this looks like a bug in the tree you are using.
> > >>
> > >> This would be a better approach, but because Xen performs allocations
> > >> from its heap prior to allocating memory to the DomU - and since it
> > >> allocates from the top of the heap - it is basically taking memory that I
> > >> wanted to set aside for the DomU.
> > >
> > > Yeah, this is the main problem that my prototype above couldn't solve.
> 
> Stefano: Is the approach that I previously described a feasible one?
>   1. Mark the addresses that I want to set aside as reserved
>   2. When reaching the proper DomU, map them and then use the mapping
> This approach would solve the heap issue.

My first suggestion would actually be to let the hypervisor pick the
address ranges. If you don't change the setup, you'll see that they are
stable across reboots. WARNING: Xen doesn't promise that they are
stable; in practice, however, they stay the same unless you change the
device tree, the configuration, or the software versions.

That said, yes, I think your approach might work, with some limitations
(e.g. Xen reclaiming memory on domU destruction, but you probably don't
care about that). It could be a decent stopgap until we get a better
solution.
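
[For reference, the first step of that stopgap - marking the ranges as
reserved so the heap allocator never touches them - could be expressed
with a standard /reserved-memory node, as in the sketch below. The
addresses follow the memory map quoted earlier in the thread; whether
setup_mm() honours "no-map" this way depends on the tree in use.]

```dts
/* Sketch: carve the DomU ranges out of the allocatable memory up front,
 * so Xen's heap allocator cannot hand them out. */
reserved-memory {
    #address-cells = <0x1>;
    #size-cells = <0x1>;
    ranges;

    domu1-ram@60000000 {
        reg = <0x60000000 0x20000000>;  /* DomU1: 0x60000000 - 0x80000000 */
        no-map;
    };

    domu2-ram@40000000 {
        reg = <0x40000000 0x20000000>;  /* DomU2: 0x40000000 - 0x60000000 */
        no-map;
    };
};
```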

From a Xen upstream point of view, it makes sense to follow the approach
used by Penny, Wei, and Bertrand, which seems to be the more flexible
one and integrates better with the existing codebase.



> > Wei and Penny are working on direct map and static allocation to fit
> > embedded use cases and might have more answers there.
> 
> Bertrand, Wei and Penny: Is there a "sneak preview"? I'd be happy to start 
> backporting to Xen 4.11

As mentioned, there is a 4.13-based Xilinx Xen tree available too.
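
[For reference, the "direct-map" property mentioned earlier in the
thread is set per domain in the dom0less node, roughly as in this
sketch. Details may differ in the Xilinx tree; note the memory property
gives a size in KiB per the dom0less binding, and Xen still chooses the
addresses.]

```dts
/* Sketch of a 1:1-mapped dom0less guest using the "direct-map" property. */
chosen {
    domU1 {
        compatible = "xen,domain";
        direct-map;                  /* guest physical == host physical */
        memory = <0x0 0x80000>;      /* 512MB, expressed in KiB */
        cpus = <1>;
        /* kernel/ramdisk sub-nodes omitted */
    };
};
```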



 

