
Re: [Xen-devel] [PATCH v5 00/16] Xen ARM DomU ACPI support



----- julien.grall@xxxxxxx wrote:

> Hi Stefano,
> 
> On 14/09/2016 21:48, Stefano Stabellini wrote:
> > On Wed, 14 Sep 2016, Julien Grall wrote:
> >> On 14/09/2016 02:06, Stefano Stabellini wrote:
> >>> On Wed, 14 Sep 2016, Shannon Zhao wrote:
> >>>> On 2016/9/13 23:17, Julien Grall wrote:
> >>>>>
> >>>>>
> >>>>> On 13/09/16 14:06, Shannon Zhao wrote:
> >>>>>> Hi Julien,
> >>>>>
> >>>>> Hello Shannon,
> >>>>>
> >>>>>> On 2016/9/13 19:56, Julien Grall wrote:
> >>>>>>> Hi Shannon,
> >>>>>>>
> >>>>>>> On 02/09/16 03:55, Shannon Zhao wrote:
> >>>>>>>> From: Shannon Zhao <shannon.zhao@xxxxxxxxxx>
> >>>>>>>>
> >>>>>>>> The design of this feature is described as below.
> >>>>>>>> Firstly, the toolstack (libxl) generates the ACPI tables according
> >>>>>>>> to the number of vcpus and the GIC controller.
> >>>>>>>>
> >>>>>>>> Then, it copies these ACPI tables to DomU non-RAM memory map space
> >>>>>>>> and passes them to UEFI firmware through the "ARM multiboot"
> >>>>>>>> protocol.
> >>>>>>>>
> >>>>>>>> At last, UEFI gets the ACPI tables through the "ARM multiboot"
> >>>>>>>> protocol, installs these tables in the usual way, and passes both
> >>>>>>>> ACPI and DT information to the Xen DomU.
> >>>>>>>>
> >>>>>>>> Currently libxl only generates the RSDP, XSDT, GTDT, MADT, FADT and
> >>>>>>>> DSDT tables, since that is enough for now.
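
For reference: apart from the RSDP, each of those generated tables starts
with the standard 36-byte ACPI SDT header, and each table carries an 8-bit
checksum chosen so that all of its bytes sum to zero mod 256. A minimal
sketch of that shared piece, with illustrative names rather than the actual
libxl code:

    #include <stdint.h>
    #include <stddef.h>

    /* Standard ACPI System Description Table header (36 bytes). */
    struct acpi_sdt_header {
        char     signature[4];       /* e.g. the MADT's signature is "APIC" */
        uint32_t length;             /* total table size in bytes */
        uint8_t  revision;
        uint8_t  checksum;           /* makes the whole table sum to 0 */
        char     oem_id[6];
        char     oem_table_id[8];
        uint32_t oem_revision;
        uint32_t creator_id;
        uint32_t creator_revision;
    } __attribute__((packed));

    /*
     * Compute the checksum byte: call this with the table's checksum field
     * zeroed; the returned value makes the byte sum wrap to 0 (mod 256).
     */
    static uint8_t acpi_checksum(const void *table, size_t len)
    {
        const uint8_t *p = table;
        uint8_t sum = 0;

        while (len--)
            sum += *p++;

        return (uint8_t)-sum;
    }
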
> >>>>>>>>
> >>>>>>>> This has been tested using a guest kernel with the Dom0 ACPI support
> >>>>>>>> patches, which can be fetched from linux master or:
> >>>>>>>>
> >>>>>>>> https://git.kernel.org/cgit/linux/kernel/git/mfleming/efi.git/log/?h=efi/arm-xen
> >>>>>>>>
> >>>>>>>> The UEFI binary can be fetched from the link below or built from the
> >>>>>>>> edk2 master branch:
> >>>>>>>> http://people.linaro.org/~shannon.zhao/DomU_ACPI/XEN_EFI.fd
> >>>>>>>
> >>>>>>> On which commit is this EFI binary based? I am trying to rebuild it
> >>>>>>> myself, and have had no luck booting it so far.
> >>>>>>>
> >>>>>> I forget the exact commit, but I just tried the commit below, which
> >>>>>> adds the support to edk2, and the guest can boot up successfully with
> >>>>>> ACPI.
> >>>>>>
> >>>>>> 402dde6 ArmVirtPkg/ArmVirtXen: Add ACPI support for Virt Xen ARM
> >>>>>
> >>>>> Thanks. That commit does not build on my platform; after some help from
> >>>>> Ard I managed to boot UEFI with the patch [1] applied.
> >>>>>
> >>>>> However, Linux does not boot when passing acpi=on and aborts with the
> >>>>> following message:
> >>>>>
> >>>>> (d86) 6RCU: Adjusting geometry for rcu_fanout_leaf=64, nr_cpu_ids=1
> >>>>> (d86) 6NR_IRQS:64 nr_irqs:64 0
> >>>>> (d86) 3No valid GICC entries exist
> >>>>> (d86) 0Kernel panic - not syncing: No interrupt controller found.
> >>>>> (d86) dCPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.8.0-rc6+ #420
> >>>>> (d86) dHardware name: XENVM-4.8 (DT)
> >>>>> (d86) Call trace:
> >>>>> (d86) [<ffff000008088708>] dump_backtrace+0x0/0x1a8
> >>>>> (d86) [<ffff0000080888c4>] show_stack+0x14/0x20
> >>>>> (d86) [<ffff0000083d6c2c>] dump_stack+0x94/0xb8
> >>>>> (d86) [<ffff00000815c24c>] panic+0x10c/0x250
> >>>>> (d86) [<ffff000008c223f8>] init_IRQ+0x24/0x2c
> >>>>> (d86) [<ffff000008c20a24>] start_kernel+0x238/0x394
> >>>>> (d86) [<ffff000008c201bc>] __primary_switched+0x30/0x74
> >>>>> (d86) 0---[ end Kernel panic - not syncing: No interrupt controller found.
> >>>>>
> >>>>> This is because the header.length for GICC is not valid for ACPI 5.1
> >>>>> (see BAD_MADT_GICC_ENTRY). So please check the size of each table
> >>>>> against ACPI 5.1.
> >>>>>
> >>>> Oops. The reason is that acpi_madt_generic_interrupt in Xen has already
> >>>> been updated to ACPI 6.0, so its length is 80 rather than the 76 of
> >>>> ACPI 5.1.
> >>>> One solution is to stay with ACPI 5.1 and set gicc->header.length to 76.
> >>>> The other is to update to ACPI 6.0, since the Xen ARM ACPI support in
> >>>> Linux was introduced after ACPI 6.0.
> >>>>
> >>>> Which one do you prefer?
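
To make the size difference concrete, a sketch (abridged and illustrative,
not the actual Xen or Linux headers): ACPI 5.1 defines the MADT GICC subtable
as 76 bytes, and ACPI 6.0 extends it to 80 bytes (as far as I recall, by
adding an efficiency-class byte plus reserved padding), so a parser that
validates header.length against the 5.1 size rejects the bigger entry, which
is roughly what the kernel's BAD_MADT_GICC_ENTRY check amounts to:

    #include <stdint.h>

    /* Expected GICC subtable lengths per spec revision (illustrative). */
    #define GICC_LENGTH_ACPI_5_1  76U
    #define GICC_LENGTH_ACPI_6_0  80U

    /* Common 2-byte header shared by all MADT subtables. */
    struct acpi_subtable_header {
        uint8_t type;
        uint8_t length;
    } __attribute__((packed));

    /* Paraphrase of the length check that triggers the panic shown above. */
    static inline int gicc_entry_is_bad(const struct acpi_subtable_header *h,
                                        unsigned int expected_length)
    {
        return h->length != expected_length;
    }
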
> >>>
> >>> Certainly the versions of all tables need to be consistent. I would
> >>> prefer ACPI 6.0, but 5.1 is acceptable too (especially if upgrading to
> >>> 6.0 would cause a large number of changes to your patches).
> >>
> >> I disagree with this; we should use the first version of ACPI that fully
> >> supports ARM, because a guest operating system may choose to support only
> >> that first one (there are a lot of hardware platforms out there which only
> >> provide ACPI 5.1).
> >
> > And I thought that compatibility was supposed to be ACPI's strong suit.
> > I mistakenly had the impression that new ACPI releases weren't supposed
> > to break old OSes. I take back my comment; you are right that we should
> > stay on 5.1 (including all the errata, many of which are important for ARM).
> >
> 
> IIRC, early versions of ACPI used to have some incompatibilities. I will
> have to go through the ACPI spec to find the main differences between
> 5.1 and 6.0 for ARM.


The transition from 1.x to 2.0 introduced incompatibilities (I believe in the
RSDP structure definition), but I thought that since then they have kept
everything backward compatible.
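
For what it's worth, that RSDP change looks roughly like this (a from-memory
illustration, not a spec excerpt): ACPI 1.0 defined a 20-byte RSDP carrying
only a 32-bit RSDT pointer, and ACPI 2.0 extended it to 36 bytes with a
length field, a 64-bit XSDT pointer and an extended checksum, keeping the
original 20 bytes at the same offsets:

    #include <stdint.h>

    /* ACPI 1.0 RSDP: 20 bytes, only a 32-bit RSDT pointer. */
    struct acpi_rsdp_v1 {
        char     signature[8];        /* "RSD PTR " */
        uint8_t  checksum;            /* covers these 20 bytes */
        char     oem_id[6];
        uint8_t  revision;            /* 0 for ACPI 1.0 */
        uint32_t rsdt_address;
    } __attribute__((packed));

    /* ACPI 2.0+ RSDP: the 1.0 layout plus a 16-byte extension (36 bytes). */
    struct acpi_rsdp_v2 {
        struct acpi_rsdp_v1 v1;       /* v1.revision is >= 2 here */
        uint32_t length;              /* total RSDP size */
        uint64_t xsdt_address;
        uint8_t  extended_checksum;   /* covers the whole 36 bytes */
        uint8_t  reserved[3];
    } __attribute__((packed));
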

-boris


> 
> Assuming the newer versions are backward compatible, it might be good to
> write down somewhere (maybe in a public header) that the guest OS should
> not assume a specific version of ACPI. This would avoid tying us to
> ACPI 5.1 and allow us to upgrade the tables in a future release of Xen.
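
One possible shape for such a note, purely as an illustration (the wording
and location are hypothetical, not actual Xen source):

    /*
     * Guests must not assume a specific ACPI revision.  The tables currently
     * conform to ACPI 5.1, but Xen may move to a newer, backward-compatible
     * revision in a future release; guests should parse the tables according
     * to the revision fields the tables themselves advertise.
     */
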
> 
> In any case, we should be consistent across all the ACPI tables (e.g.
> version, size of the tables...) to accommodate picky OSes. For now, I
> would stay on ACPI 5.1 for safety.
> 
> Regards,
> 
> --
> Julien Grall
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

