
Re: xen/arm: attaching block devices under EFI



On Thu, 20 Oct 2022, Julien Grall wrote:
> (I forgot to add Stefano...)
> 
> On 08/10/2022 18:57, Benjamin Mordaunt wrote:
> > On Sat Oct 8, 2022 at 6:55 PM BST, Benjamin Mordaunt wrote:
> > > Following my previous chat with Julien, I'm assuming the flow:
> > > 
> > > U-Boot -> Xen -> EFI (for guest) -> GRUB -> Ubuntu
> > > 
> > > is not really possible - there is no chain of trust for secure boot,
> > > and EFI information from the underlying firmware is lost (i.e. what EFI
> > > information would Xen present to the guest's GRUB?)


Hi Ben,

First, let me tell you about two recent Xen developments that might be
relevant to secure boot (assuming you're interested in secure boot and a
chain of trust).

As you probably know, with Xen Dom0less (docs/features/dom0less.pandoc,
docs/misc/arm/device-tree/booting.txt) it is possible to boot multiple
VMs in parallel directly from Xen. The kernels and ramdisks are loaded
by U-Boot.

It is possible to put all the boot binaries including:
- xen hypervisor binary
- dom0 kernel & ramdisk
- all dom0less domU kernels & ramdisks & passthrough configurations
in a single U-Boot FIT image, and then sign the FIT image and verify the
signature at boot. ImageBuilder has support for it already:

https://gitlab.com/xen-project/imagebuilder

See "FIT", "FIT_ENC_KEY_DIR" and "FIT_ENC_UB_DTB". This solution has
very similar security properties to secure boot because all the boot
binaries are signed and the signature is verified by U-Boot.
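
To give a concrete idea, a rough sketch of the relevant part of an
ImageBuilder config could look like the following. The three FIT
options are the ones mentioned above; the other variable names and the
values are from memory, so please double-check them against
ImageBuilder's README:

  # board-specific placeholder values
  MEMORY_START="0x40000000"
  MEMORY_END="0x80000000"
  DEVICE_TREE="board.dtb"

  XEN="xen"
  DOM0_KERNEL="Image-dom0"
  DOM0_RAMDISK="dom0-ramdisk.cpio"

  NUM_DOMUS=1
  DOMU_KERNEL[0]="Image-domu1"
  DOMU_RAMDISK[0]="domu1-ramdisk.cpio"

  # FIT generation and signing (see the options mentioned above)
  FIT="y"                        # enable FIT image generation
  FIT_ENC_KEY_DIR="keys/"        # directory containing the signing keys
  FIT_ENC_UB_DTB="u-boot.dtb"    # U-Boot control DTB to embed the public key in

U-Boot then checks the FIT signature at boot using the public key
embedded in its control DTB, so everything listed above is covered by
the same signature verification.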


Another recent development that you might find interesting is that we
added support to QEMU so that it can emulate selected devices for Xen
VMs on ARM:

https://marc.info/?l=qemu-devel&m=166581066020967

We did that specifically to emulate a TPM device and make it available
to guests. We have tested a Linux guest accessing the TPM device. Vikram
(CCed) might have more info for you if you want to set it up.



> > > So I'm now investigating a full EFI+arm stack, but some things are still
> > > not clear. I'm following the information presented in [1], but can't see
> > > how you dedicate block devices to a particular domain, like you can with
> > > a standard xl.cfg configuration. Let's take a DomU DT entry from [1] as
> > > an example:
> > > 
> > > domU1 {
> > >      #size-cells = <0x1>;
> > >      #address-cells = <0x1>;
> > >      compatible = "xen,domain";
> > >      cpus = <0x1>;
> > >      memory = <0x0 0xc0000>;
> > >      vpl011;
> > > 
> > >      module@1 {
> > >          compatible = "multiboot,kernel", "multiboot,module";
> > >          xen,uefi-binary = "Image-domu1.bin";
> > >          bootargs = "console=ttyAMA0 root=/dev/ram0 rw";
> > >      };
> > > };
> > > 
> > > So, what if I have a Linux image in some filesystem image somewhere, (I
> > > imagine in the Dom0 rootfs or more ideally in an LVM volume) that
> > > contains an EFI GRUB2 image that I want to boot into? I see no reference
> > > to a "disk" option, as you would write into a traditional Xen config
> > > file?
> > > 
> > > How do I "sandbox" guests to only see the disks that they are assigned?
> > > 
> > > Basically, how do I configure disks at all?!

If you want to expose a partition or LVM volume to a VM, you need to
share the underlying SATA or MMC controller among multiple guests,
which is where the PV block drivers come in. It is not possible to
configure PV drivers from the dom0less device tree boot configuration.

However, it is possible to "hotplug" PV devices after boot. So you could
boot dom0 plus 2 additional domUs in parallel with dom0less, then you
can call:

  xl block-attach

and hotplug a partition/disk into one domU and another one into the other
domU. This works well and it has been tested with Linux. All the code is
already upstream.
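
For example, assuming the two domUs are called domU1 and domU2 and the
volumes already exist in dom0 (device paths and names below are just
placeholders), it would look something like this, run from dom0 after
the guests have booted:

  xl block-attach domU1 phy:/dev/vg0/domu1-disk,xvda,w
  xl block-attach domU2 phy:/dev/sda3,xvda,w

  # check what got attached
  xl block-list domU1

Each guest then sees its disk as a regular xvda PV block device via
blkfront.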

However, if you need to access the disk from the early boot stages in
your guest, then this doesn't work for you.

Taking a step back to look at the big picture: if you need a virtual
block device very early at boot (e.g. EFI firmware running inside a VM),
and the virtual block device is provided by a backend in dom0, then you
have to wait for dom0 to be fully booted anyway. There is no advantage
in using dom0less in that setup; you might as well use "xl create".

The exception is when the block device is a physical device that can be
assigned directly to your domU. In that case the domU can access, for
example, the MMC controller directly, and that works fine early at boot
without having to wait for dom0.
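
For completeness, if you go the "xl create" route instead, the disk is
configured in the domU config file as usual. A minimal sketch (names
and paths are placeholders):

  # domu1.cfg
  name = "domu1"
  memory = 1024
  vcpus = 1
  kernel = "/root/Image-domu1.bin"
  extra = "console=hvc0 root=/dev/xvda rw"
  disk = [ 'phy:/dev/vg0/domu1-rootfs,xvda,w' ]

Then, from dom0:

  xl create domu1.cfg

The guest only gets /dev/xvda backed by that LVM volume; it cannot see
any other disk in the system.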



 

