Re: [Xen-devel] [PATCH v2 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
> -----Original Message-----
> From: Boris Ostrovsky [mailto:boris.ostrovsky@xxxxxxxxxx]
> Sent: 09 November 2016 14:40
> To: xen-devel@xxxxxxxxxxxxx
> Cc: jbeulich@xxxxxxxx; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
> Wei Liu <wei.liu2@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> Roger Pau Monne <roger.pau@xxxxxxxxxx>; Boris Ostrovsky
> <boris.ostrovsky@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Subject: [PATCH v2 07/11] pvh/ioreq: Install handlers for ACPI-related
> PVH IO accesses
>
> PVH guests will have ACPI accesses emulated by the hypervisor,
> as opposed to QEMU (as is the case for HVM guests).
>
> Support for IOREQ server emulation of CPU hotplug is indicated
> by the XEN_X86_EMU_IOREQ_CPUHP flag.
>
> Logic for the handler will be provided by a later patch.
>
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> ---
> CC: Paul Durrant <paul.durrant@xxxxxxxxxx>
> ---
> Changes in v2:
> * Introduce XEN_X86_EMU_IOREQ_CPUHP, don't set
>   HVM_PARAM_NR_IOREQ_SERVER_PAGES for PVH guests

Is 'CPUHP' the right name? The same GPE block could be used for DIMM
and PCI hotplug, right?

That aside...
Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> * Register IO handler for PVH from hvm_ioreq_init()
>
>  xen/arch/x86/hvm/ioreq.c          | 18 ++++++++++++++++++
>  xen/include/asm-x86/domain.h      |  2 ++
>  xen/include/public/arch-x86/xen.h |  5 ++++-
>  3 files changed, 24 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index d2245e2..e6ff48f 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1380,6 +1380,12 @@ static int hvm_access_cf8(
>      return X86EMUL_UNHANDLEABLE;
>  }
>
> +static int acpi_ioaccess(
> +    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
> +{
> +    return X86EMUL_OKAY;
> +}
> +
>  void hvm_ioreq_init(struct domain *d)
>  {
>      spin_lock_init(&d->arch.hvm_domain.ioreq_server.lock);
> @@ -1387,6 +1393,18 @@ void hvm_ioreq_init(struct domain *d)
>
>      if ( !is_pvh_domain(d) )
>          register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
> +
> +    if ( !has_ioreq_cpuhp(d) )
> +    {
> +        /* Online CPU map, see DSDT's PRST region. */
> +        register_portio_handler(d, ACPI_CPU_MAP,
> +                                ACPI_CPU_MAP_LEN, acpi_ioaccess);
> +
> +        register_portio_handler(d, ACPI_GPE0_BLK_ADDRESS_V1,
> +                                ACPI_GPE0_BLK_LEN_V1, acpi_ioaccess);
> +        register_portio_handler(d, ACPI_PM1A_EVT_BLK_ADDRESS_V1,
> +                                ACPI_PM1A_EVT_BLK_LEN, acpi_ioaccess);
> +    }
>  }
>
>  /*
>
> diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
> index a279e4a..7aa736c 100644
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -431,6 +431,8 @@ struct arch_domain
>  #define has_vvga(d)        (!!((d)->arch.emulation_flags & XEN_X86_EMU_VGA))
>  #define has_viommu(d)      (!!((d)->arch.emulation_flags & XEN_X86_EMU_IOMMU))
>  #define has_vpit(d)        (!!((d)->arch.emulation_flags & XEN_X86_EMU_PIT))
> +#define has_ioreq_cpuhp(d) (!!((d)->arch.emulation_flags & \
> +                                XEN_X86_EMU_IOREQ_CPUHP))
>
>  #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
>
> diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
> index cdd93c1..350bc66 100644
> --- a/xen/include/public/arch-x86/xen.h
> +++ b/xen/include/public/arch-x86/xen.h
> @@ -283,12 +283,15 @@ struct xen_arch_domainconfig {
>  #define XEN_X86_EMU_IOMMU           (1U<<_XEN_X86_EMU_IOMMU)
>  #define _XEN_X86_EMU_PIT            8
>  #define XEN_X86_EMU_PIT             (1U<<_XEN_X86_EMU_PIT)
> +#define _XEN_X86_EMU_IOREQ_CPUHP    9
> +#define XEN_X86_EMU_IOREQ_CPUHP     (1U<<_XEN_X86_EMU_IOREQ_CPUHP)
>
>  #define XEN_X86_EMU_ALL             (XEN_X86_EMU_LAPIC | XEN_X86_EMU_HPET | \
>                                       XEN_X86_EMU_PM | XEN_X86_EMU_RTC |     \
>                                       XEN_X86_EMU_IOAPIC | XEN_X86_EMU_PIC | \
>                                       XEN_X86_EMU_VGA | XEN_X86_EMU_IOMMU |  \
> -                                     XEN_X86_EMU_PIT)
> +                                     XEN_X86_EMU_PIT |                      \
> +                                     XEN_X86_EMU_IOREQ_CPUHP)
>      uint32_t emulation_flags;
>  };
>  #endif
> --
> 2.7.4

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel