
Re: [Xen-devel] Xen ARM Dom0less passthrough without IOMMU


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Andrei Cherechesu <andrei.cherechesu@xxxxxxx>
  • Date: Tue, 17 Dec 2019 17:20:32 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>
  • Delivery-date: Tue, 17 Dec 2019 17:20:49 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

> On Mon, 16 Dec 2019, Julien Grall wrote:
> > On 16/12/2019 23:05, Stefano Stabellini wrote:
> > > On Mon, 16 Dec 2019, Julien Grall wrote:
> > > > On 16/12/2019 18:02, Andrei Cherechesu wrote:
> > > > But even with this patch, RAM in DomU is not direct mapped (i.e.
> > > > Guest Physical Address == Host Physical Address). This means that
> > > > a DMA-capable device would not work properly in DomU.
> > > >
> > > > We could theoretically direct map the DomU, but this would break
> > > > the isolation provided by the hypervisor.
> > >
> > > Yes, being able to map the DomU memory 1:1 can be pretty useful for some
> > > very embedded dom0less configurations, in fact I was surprised that a
> > > couple of Xilinx users asked me for that recently. Typically, the users
> > > are aware of the consequences but they still find them better than the
> > > alternative (i.e. the lack of isolation is bad but is tolerable in their
> > > configuration.)
> > This does not make much sense... The whole point of a hypervisor is to
> > isolate guests from each other... So if you are happy with the lack of
> > isolation, then why are you using a hypervisor in the first place?
>
> There are a number of reasons, although they are all variation of the
> same theme. In all these cases the IOMMU cannot be used for one reason
> or the other (a device is not behind the IOMMU, or due to an errata,
> etc.)
>
> - multiple baremetal apps
> The user wants to run two or more baremetal (unikernel-like)
> applications. The user owns both applications and she is not much
> concerned about isolation (although it is always desirable when
> possible.)
>
> - multiple OSes
> This is similar to the one before, however, instead of multiple
> baremetal apps, we are talking about multiple full OSes. For instance,
> Linux and Android or Linux and VxWorks. Again, they are both maintained
> by the same user (no multi-tenancy) so isolation is desirable but it is
> not the top concern.
>
> - real-time / no real-time
> The user wants to run a real-time OS or real-time baremetal app and a
> non real-time OS. For instance a tiny baremetal app controlling one
> specific device and Linux. Again, the user is responsible for both
> systems so isolation is not a concern.
>
> In all these cases the user has to run multiple OSes or baremetal apps,
> so she needs a hypervisor. However, it is tolerable that the apps are
> not fully isolated from each other, because they are both developed and
> deployed together by the same "owner".
>

Basically, since we do not have an IOMMU, we would be able to ensure
memory isolation via an NXP IP named xRDC (Extended Resource Domain
Controller) that our boards have, which supervises access to the
memory buses.

But before we get to think about isolation, we need to enable basic
passthrough functionality (via 1:1 mapping, since there is no IOMMU).

Firstly, a good step forward would be to get a non-DMA-capable device
passed through and working.
I rebased onto the upstream/staging branch and applied the hack that
Julien described, which skips setting the XEN_DOMCTL_CDF_iommu flag.
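
For reference, my understanding of the hack is roughly the following
(an untested sketch; the exact condition and its location inside
create_domUs() in xen/arch/arm/domain_build.c may differ on current
staging):

```diff
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ create_domUs()
-        if ( dt_find_compatible_node(node, NULL, "multiboot,device-tree") )
-            d_cfg.flags |= XEN_DOMCTL_CDF_iommu;
+        /* HACK: never request IOMMU support for the domU, so that
+         * domain creation does not fail on a platform without an
+         * IOMMU, even when a passthrough partial DT is present. */
```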

Then I tried to pass through the eMMC, but I got the following
error:
(XEN) DOM1: [    0.879151] sdhci-esdhc-imx 4005d000.usdhc: can't request region for resource [mem 0x4005d000-0x4005dfff]
(XEN) DOM1: [    0.891137] sdhci-esdhc-imx 4005d000.usdhc: sdhci_pltfm_init failed -16
(XEN) DOM1: [    0.900249] sdhci-esdhc-imx: probe of 4005d000.usdhc failed with error -16

Where 0x4005d000 is the physical address of the uSDHC (eMMC) node in the DT.
It seems that the DomU1 kernel does not have access to that memory region.
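
In case it helps, the relevant node in my partial device tree (the
one passed as the "multiboot,device-tree" module) looks roughly like
this simplified sketch; the compatible string, interrupts and clocks
are copied from our host DT and elided here, and the xen,path value
is only an example:

```dts
/dts-v1/;

/ {
    #address-cells = <0x2>;
    #size-cells = <0x2>;

    passthrough {
        compatible = "simple-bus";
        ranges;
        #address-cells = <0x2>;
        #size-cells = <0x2>;

        usdhc@4005d000 {
            /* compatible, interrupts, clocks: copied from host DT */
            reg = <0x0 0x4005d000 0x0 0x1000>;
            /* map the MMIO region 1:1 (guest address == host address) */
            xen,reg = <0x0 0x4005d000 0x0 0x1000 0x0 0x4005d000>;
            xen,path = "/soc/usdhc@4005d000";
            xen,force-assign-without-iommu;
        };
    };
};
```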

I'm trying to pass through the eMMC in order to mount DomU1's root
on an SD card partition, because I couldn't get to DomU1's Linux prompt
when I tried to boot with a ramdisk module. I always get this error:
(XEN) DOM1: [    1.544199] RAMDISK: Couldn't find valid RAM disk image starting at 0.

Could this be because the ramdisk is too big? The smallest I've tried
is approximately 60 MB in size. What size are the ramdisks that you
are using in your dom0less booting demos?
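
For reference, this is roughly how I declare the domU ramdisk in the
device tree passed to Xen (addresses, sizes, and bootargs here are
made up for illustration; the reg size must cover the whole initrd as
loaded by the bootloader):

```dts
chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x2>;
        #size-cells = <0x2>;
        memory = <0x0 0x40000>;          /* in KB, i.e. 256 MB */
        cpus = <0x1>;

        module@80080000 {
            compatible = "multiboot,kernel", "multiboot,module";
            reg = <0x0 0x80080000 0x0 0x1000000>;
            bootargs = "console=ttyAMA0 root=/dev/ram0";
        };

        module@84000000 {
            compatible = "multiboot,ramdisk", "multiboot,module";
            /* size must be >= the actual initrd size (~60 MB here) */
            reg = <0x0 0x84000000 0x0 0x4000000>;
        };
    };
};
```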

> > >  From an implementation perspective, it should be a matter of calling
> > > allocate_memory_11 instead of allocate_memory from construct_domU. I
> > > wanted to experiment with it myself but I haven't had the time. If
> > > nothing else, it would be useful to have a patch around to do it if
> > > needed.
> > This is not that simple. You at least also need to:
> >     - Update the code to generate the DT based on the new 1:1 address
> >     - Modify the various emulation in Xen because they rely on Xen guest
> > memory layout
> >     - Modify is_domain_direct_mapped() to deal with such guests
> >
> > I probably missed other bits. Anyway, this is not something I am willing
> > to accept upstream, as this breaks the core idea of a hypervisor.
>
> If you prefer not to have it upstream, I would be happy to maintain it
> downstream in Xilinx/Xen or another tree, and take it as a contribution
> from Andrei if he volunteers to write and test the patch.
>
> Andrei, if you are going to write the patch, thanks in advance :-)
> Otherwise, I might get to it at some point but it might be a while.
>
> Cheers,
>
> Stefano

I'll gladly write the patch if you can give me some basic instructions,
because I'm not that familiar with all of Xen's internal mechanisms and
I wouldn't know where to look to make sure everything is done properly.
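
For instance, is the starting point roughly the following (an untested
sketch against construct_domU() in xen/arch/arm/domain_build.c,
ignoring for now the DT generation and emulation changes Julien
listed)?

```diff
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ static int __init construct_domU(...)
-    allocate_memory(d, &kinfo);
+    /* Allocate domU RAM 1:1 (GPA == HPA), as done for dom0. */
+    allocate_memory_11(d, &kinfo);
```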

Thank you very much for your help,
Andrei


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

