Re: [Xen-devel] [PATCH RFC 00/20] Make ACPI builder available to components other than hvmloader
On 04/05/2016 09:25 PM, Boris Ostrovsky wrote:
> This is an RFC for making hvmloader's ACPI builder available to both the
> toolstack and the hypervisor, as discussed in
> http://lists.xenproject.org/archives/html/xen-devel/2016-02/msg01228.html

When do people think they will get a chance to comment on this? Should this
wait until after 4.7 is released?

Thanks.
-boris

> The series
>  * Removes dependency of today's builder on hvmloader interfaces
>  * Makes building many of the tables optional.
>  * Moves the tools/firmware/hvmloader/acpi directory to xen/common/libacpi
>  * Builds tables for PVHv2 domU guests in libxc
>
> There are still a number of questions about this implementation, thus it's
> an RFC. Examples of things that need to be discussed are
>
>  * ACPI tables are built for PVHv2 guests unconditionally. We probably want
>    to make this an option.
>  * Not sure about header files, especially xen/common/libacpi/x86.h and
>    tools/firmware/hvmloader/{stdio.h,string.h}
>  * The builder is compiled into the hypervisor even though currently there
>    are no users (PVHv2 dom0 will be the one)
>  * Patch 19 is somewhat of a spec violation
>  * Makefiles are questionable
>  * May need changes to guests' e820 map
>
> This is also available from
> git://oss.oracle.com/git/bostrovs/xen.git:acpi_rfc.
>
> It has been tested with Linux PVHv2 (and I believe Roger tested an earlier
> version with FreeBSD). No passthrough testing has been done.
>
> (I realize that many people are busy because of Friday's freeze, but I
> figured I'd post it now in the hope that this may get some reading so that
> we can talk about it at the hackathon)
>
> Boris Ostrovsky (20):
>   hvmloader: Provide hvmloader_acpi_build_tables()
>   acpi/hvmloader: Move acpi_info initialization out of ACPI code
>   acpi/hvmloader: Initialize vm_gid data outside ACPI code
>   acpi/hvmloader: Decide which SSDTs to build in hvmloader
>   acpi/hvmloader: Move passthrough initialization from ACPI code
>   acpi/hvmloader: Collect processor and NUMA info in hvmloader
>   acpi/hvmloader: Set TIS header address in hvmloader
>   acpi/hvmloader: Make providing IOAPIC in MADT optional
>   acpi/hvmloader: Build WAET optionally
>   acpi/hvmloader: Provide address of acpi_info as an argument to ACPI code
>   acpi/hvmloader: Translate all addresses when assigning addresses in ACPI tables
>   acpi/hvmloader: Link ACPI object files directly
>   acpi/hvmloader: Add stdio.h, string.h and x86.h
>   acpi/hvmloader: Replace mem_alloc() and virt_to_phys() with memory ops
>   acpi: Move ACPI code to xen/common/libacpi
>   x86/vlapic: Don't try to accept 8259 interrupt if !has_vpic()
>   x86: Allow LAPIC-only emulation_flags for HVM guests
>   libxc/acpi: Build ACPI tables for HVMlite guests
>   acpi: Set HW_REDUCED_ACPI in FADT if IOAPIC is not supported
>   acpi: Make ACPI builder available to hypervisor code
>
>  .gitignore | 8 +-
>  tools/firmware/hvmloader/Makefile | 16 +-
>  tools/firmware/hvmloader/config.h | 13 +-
>  tools/firmware/hvmloader/hvmloader.c | 3 +-
>  tools/firmware/hvmloader/mp_tables.c | 1 +
>  tools/firmware/hvmloader/ovmf.c | 4 +-
>  tools/firmware/hvmloader/pci.c | 1 +
>  tools/firmware/hvmloader/pir.c | 1 +
>  tools/firmware/hvmloader/rombios.c | 4 +-
>  tools/firmware/hvmloader/seabios.c | 4 +-
>  tools/firmware/hvmloader/smbios.c | 1 +
>  tools/firmware/hvmloader/smp.c | 1 +
>  tools/firmware/hvmloader/stdio.h | 7 +
>  tools/firmware/hvmloader/string.h | 7 +
>  tools/firmware/hvmloader/util.c | 85 +++++
>  tools/firmware/hvmloader/util.h | 6 +-
>  tools/firmware/rombios/32bit/Makefile | 2 +-
>  tools/firmware/rombios/32bit/tcgbios/Makefile | 2 +-
>  tools/firmware/rombios/32bit/util.h | 2 +-
>  tools/libxc/Makefile | 22 +-
>  tools/libxc/include/xc_dom.h | 1 +
>  tools/libxc/xc_acpi.c | 268 ++++++++++++++
>  tools/libxc/xc_dom_x86.c | 7 +
>  tools/libxl/libxl_x86.c | 19 +-
>  xen/arch/x86/domain.c | 26 +-
>  xen/arch/x86/hvm/vlapic.c | 3 +
>  xen/common/Makefile | 2 +-
>  .../hvmloader/acpi => xen/common/libacpi}/Makefile | 33 +-
>  .../hvmloader/acpi => xen/common/libacpi}/README | 0
>  .../acpi => xen/common/libacpi}/acpi2_0.h | 66 +++-
>  .../hvmloader/acpi => xen/common/libacpi}/build.c | 415 ++++++++++-----------
>  .../hvmloader/acpi => xen/common/libacpi}/dsdt.asl | 0
>  xen/common/libacpi/dsdt_empty.asl | 22 ++
>  .../acpi => xen/common/libacpi}/mk_dsdt.c | 4 +
>  .../acpi => xen/common/libacpi}/ssdt_pm.asl | 0
>  .../acpi => xen/common/libacpi}/ssdt_s3.asl | 0
>  .../acpi => xen/common/libacpi}/ssdt_s4.asl | 0
>  .../acpi => xen/common/libacpi}/ssdt_tpm.asl | 0
>  .../acpi => xen/common/libacpi}/static_tables.c | 1 -
>  xen/common/libacpi/x86.h | 14 +
>  40 files changed, 794 insertions(+), 277 deletions(-)
>  create mode 100644 tools/firmware/hvmloader/stdio.h
>  create mode 100644 tools/firmware/hvmloader/string.h
>  create mode 100644 tools/libxc/xc_acpi.c
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/Makefile (70%)
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/README (100%)
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/acpi2_0.h (84%)
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/build.c (58%)
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/dsdt.asl (100%)
>  create mode 100644 xen/common/libacpi/dsdt_empty.asl
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/mk_dsdt.c (99%)
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/ssdt_pm.asl (100%)
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/ssdt_s3.asl (100%)
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/ssdt_s4.asl (100%)
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/ssdt_tpm.asl (100%)
>  rename {tools/firmware/hvmloader/acpi => xen/common/libacpi}/static_tables.c (99%)
>  create mode 100644 xen/common/libacpi/x86.h

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
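The patch "acpi/hvmloader: Replace mem_alloc() and virt_to_phys() with memory
ops" in the quoted series is what lets the same builder run under hvmloader,
libxc and the hypervisor: instead of calling hvmloader's allocator and address
translation directly, the builder goes through callbacks supplied by whichever
environment embeds it. The sketch below only illustrates that idea and is not
the interface from the series; the names acpi_ctxt, acpi_mem_ops, alloc, v2p
and construct_table are assumptions made for the example.

/*
 * Illustrative sketch only: a caller-supplied "memory ops" table for the
 * ACPI builder.  All names here are hypothetical, not taken from the series.
 */
#include <stdint.h>

struct acpi_ctxt;

struct acpi_mem_ops {
    /* Allocate 'size' bytes of guest memory for a table, 'align'-aligned. */
    void *(*alloc)(struct acpi_ctxt *ctxt, uint32_t size, uint32_t align);
    /* Translate a builder-visible pointer into the guest-physical address
     * that gets written into table pointer fields (RSDT/XSDT entries etc.). */
    unsigned long (*v2p)(struct acpi_ctxt *ctxt, void *v);
};

struct acpi_ctxt {
    struct acpi_mem_ops mem_ops;
    void *priv;                 /* embedder-specific state */
};

/* Builder-side helper: allocate a table and return the guest-physical
 * address to store in an RSDT/XSDT entry, or 0 on failure. */
static unsigned long construct_table(struct acpi_ctxt *ctxt, uint32_t len)
{
    void *t = ctxt->mem_ops.alloc(ctxt, len, 16);

    return t ? ctxt->mem_ops.v2p(ctxt, t) : 0;
}

Under such a scheme hvmloader could back alloc() with its existing mem_alloc()
and v2p() with virt_to_phys(), while a libxc caller would allocate from its
local mapping of the guest and compute guest-physical addresses from the
mapping's base, and the hypervisor would do the same with its own mapping
primitives.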