Re: [Xen-devel] [v2][PATCH 8/8] xen, gfx passthrough: add opregion mapping
> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@xxxxxxxxxxxxx]
> Sent: Monday, May 19, 2014 7:54 PM
> To: Chen, Tiejun
> Cc: anthony.perard@xxxxxxxxxx; stefano.stabellini@xxxxxxxxxxxxx;
> mst@xxxxxxxxxx; Kelly.Zytaruk@xxxxxxx; qemu-devel@xxxxxxxxxx;
> xen-devel@xxxxxxxxxxxxxxxxxxx; peter.maydell@xxxxxxxxxx;
> anthony@xxxxxxxxxxxxx; weidong.han@xxxxxxxxx; Kay, Allen M;
> jean.guyader@xxxxxxxxxxxxx; Zhang, Yang Z
> Subject: Re: [v2][PATCH 8/8] xen, gfx passthrough: add opregion mapping
>
> On Fri, 16 May 2014, Tiejun Chen wrote:
> > The OpRegion shouldn't be mapped 1:1 because the address in the host
> > can't be used in the guest directly.
> >
> > This patch traps read and write access to the opregion of the Intel
> > GPU config space (offset 0xfc).
> >
> > The original patch is from Jean Guyader <jean.guyader@xxxxxxxxxxxxx>
> >
> > Signed-off-by: Yang Zhang <yang.z.zhang@xxxxxxxxx>
> > Signed-off-by: Tiejun Chen <tiejun.chen@xxxxxxxxx>
> > Cc: Jean Guyader <jean.guyader@xxxxxxxxxxxxx>
> > ---
> > v2:
> >
> > * We should return zero as an invalid address value while calling
> >   igd_read_opregion().
> >
> >  hw/xen/xen_pt.h             |  4 +++-
> >  hw/xen/xen_pt_config_init.c | 45 ++++++++++++++++++++++++++++++++++++++++++-
> >  hw/xen/xen_pt_graphics.c    | 47 +++++++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 94 insertions(+), 2 deletions(-)
> >
> > diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> > index 507165c..25147cf 100644
> > --- a/hw/xen/xen_pt.h
> > +++ b/hw/xen/xen_pt.h
> > @@ -63,7 +63,7 @@ typedef int (*xen_pt_conf_byte_read)
> >  #define XEN_PT_BAR_UNMAPPED (-1)
> >
> >  #define PCI_CAP_MAX 48
> > -
> > +#define PCI_INTEL_OPREGION 0xfc
> >
> >  typedef enum {
> >      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
> > @@ -306,5 +306,7 @@ int pci_create_pch(PCIBus *bus);
> >  void igd_pci_write(PCIDevice *pci_dev, uint32_t config_addr,
> >                     uint32_t val, int len);
> >  uint32_t igd_pci_read(PCIDevice *pci_dev, uint32_t config_addr, int len);
> > +uint32_t igd_read_opregion(XenPCIPassthroughState *s);
> > +void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
> >
> >  #endif /* !XEN_PT_H */
> > diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
> > index de9a20f..cf36a40 100644
> > --- a/hw/xen/xen_pt_config_init.c
> > +++ b/hw/xen/xen_pt_config_init.c
> > @@ -575,6 +575,22 @@ static int xen_pt_exp_rom_bar_reg_write(XenPCIPassthroughState *s,
> >      return 0;
> >  }
> >
> > +static int xen_pt_intel_opregion_read(XenPCIPassthroughState *s,
> > +                                      XenPTReg *cfg_entry,
> > +                                      uint32_t *value, uint32_t valid_mask)
> > +{
> > +    *value = igd_read_opregion(s);
> > +    return 0;
> > +}
> > +
> > +static int xen_pt_intel_opregion_write(XenPCIPassthroughState *s,
> > +                                       XenPTReg *cfg_entry, uint32_t *value,
> > +                                       uint32_t dev_value, uint32_t valid_mask)
> > +{
> > +    igd_write_opregion(s, *value);
> > +    return 0;
> > +}
> > +
> >  /* Header Type0 reg static information table */
> >  static XenPTRegInfo xen_pt_emu_reg_header0[] = {
> >      /* Vendor ID reg */
> > @@ -1440,6 +1456,20 @@ static XenPTRegInfo xen_pt_emu_reg_msix[] = {
> >      },
> >  };
> >
> > +static XenPTRegInfo xen_pt_emu_reg_igd_opregion[] = {
> > +    /* Intel IGFX OpRegion reg */
> > +    {
> > +        .offset     = 0x0,
> > +        .size       = 4,
> > +        .init_val   = 0,
> > +        .no_wb      = 1,
> > +        .u.dw.read  = xen_pt_intel_opregion_read,
> > +        .u.dw.write = xen_pt_intel_opregion_write,
> > +    },
> > +    {
> > +        .size = 0,
> > +    },
> > +};
> >
> >  /****************************
> >   * Capabilities
> > @@ -1677,6 +1707,14 @@ static const XenPTRegGroupInfo xen_pt_emu_reg_grps[] = {
> >          .size_init = xen_pt_msix_size_init,
> >          .emu_regs = xen_pt_emu_reg_msix,
> >      },
> > +    /* Intel IGD Opregion group */
> > +    {
> > +        .grp_id = PCI_INTEL_OPREGION,
> > +        .grp_type = XEN_PT_GRP_TYPE_EMU,
> > +        .grp_size = 0x4,
> > +        .size_init = xen_pt_reg_grp_size_init,
> > +        .emu_regs = xen_pt_emu_reg_igd_opregion,
> > +    },
> >      {
> >          .grp_size = 0,
> >      },
>
> If I am not mistaken, in the original patch to qemu-xen-traditional, we
> were not adding an Intel IGD Opregion group. Instead we were changing
> the size of the header Type0 reg group:
>
> +static uint8_t pt_reg_grp_header0_size_init(struct pt_dev *ptdev,
> +    struct pt_reg_grp_info_tbl *grp_reg, uint32_t base_offset)
> +{
> +    /*
> +    ** By default we will trap up to 0x40 in the cfg space.
> +    ** If an intel device is passed through we need to trap 0xfc,
> +    ** therefore the size should be 0xff.
> +    */
> +    if (igd_passthru)
> +        return 0xFF;
> +    return grp_reg->grp_size;
> +}
>
> Here instead we are adding the new group and forcing the offset to be
> PCI_INTEL_OPREGION, below. It looks functionally equivalent, but nicer.
>
> But wouldn't it be even better to have find_cap_offset return the right
> offset for the Intel IGD Opregion group? Why do we need to manually set
> reg_grp_offset to PCI_INTEL_OPREGION?
> >
> > @@ -1806,7 +1844,8 @@ int xen_pt_config_init(XenPCIPassthroughState *s)
> >          uint32_t reg_grp_offset = 0;
> >          XenPTRegGroup *reg_grp_entry = NULL;
> >
> > -        if (xen_pt_emu_reg_grps[i].grp_id != 0xFF) {
> > +        if (xen_pt_emu_reg_grps[i].grp_id != 0xFF
> > +            && xen_pt_emu_reg_grps[i].grp_id != PCI_INTEL_OPREGION) {

Actually in this emulated case we still call find_cap_offset() to get
reg_grp_offset.

> >              if (xen_pt_hide_dev_cap(&s->real_device,
> >                                      xen_pt_emu_reg_grps[i].grp_id)) {
> >                  continue;
> >              }
> > @@ -1819,6 +1858,10 @@ int xen_pt_config_init(XenPCIPassthroughState *s)
> >              }
> >          }
> >
> > +        if (xen_pt_emu_reg_grps[i].grp_id == PCI_INTEL_OPREGION) {
> > +            reg_grp_offset = PCI_INTEL_OPREGION;
> > +        }
> > +

But for this pass-through scenario, we have to set 0xfc manually since we
need to trap 0xfc completely, as that comment says:

+    /*
+    ** By default we will trap up to 0x40 in the cfg space.
+    ** If an intel device is passed through we need to trap 0xfc,
+    ** therefore the size should be 0xff.
+    */

Thanks

Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel