Re: [Xen-devel] [PATCH for-4.13 v3] x86/passthrough: fix migration of MSI when using posted interrupts
This patch syncs PIRR with IRR when the MSI-X table is updated. I ran the same test for over 1.5 hours and did not reproduce the issue; without the patch, I could reproduce it within 10 minutes.

Tested-by: Joe Jin <joe.jin@xxxxxxxxxx>

Thanks,
Joe

On 11/8/19 5:34 AM, Roger Pau Monne wrote:
> When using posted interrupts and the guest migrates MSI between vCPUs, Xen
> needs to flush any pending PIRR vectors on the previous vCPU, or else
> those vectors could get wrongly injected at a later point when the MSI
> fields have already been updated.
>
> Rename sync_pir_to_irr to vlapic_sync_pir_to_irr and export it so it
> can be called when updating the binding of physical interrupts to
> guests.
>
> Note that PIRR is synced to IRR both in pt_irq_destroy_bind and
> pt_irq_create_bind when the interrupt delivery data is being updated.
>
> Also store the vCPU ID in multi-destination mode when using posted
> interrupts, so that the interrupt is always injected to a known vCPU in
> order to be able to flush the PIRR when modifying the binding.
>
> Reported-by: Joe Jin <joe.jin@xxxxxxxxxx>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> Cc: Joe Jin <joe.jin@xxxxxxxxxx>
> Cc: Juergen Gross <jgross@xxxxxxxx>
> ---
> I would like to see a bug fix for this issue in 4.13. The fix here only
> affects posted interrupts, hence I think the risk of breaking anything
> else is low.
> ---
> Changes since v2:
>  - Also sync PIRR with IRR when using CPU posted interrupts.
>  - Force the selection of a specific vCPU when using posted interrupts
>    for multi-dest.
>  - Change vmsi_deliver_pirq to honor dest_vcpu_id.
>
> Changes since v1:
>  - Store the vcpu id also in multi-dest mode if the interrupt is bound
>    to a vcpu for posted delivery.
>  - s/#if/#ifdef/.
> ---
>  xen/arch/x86/hvm/vlapic.c        |  6 +++---
>  xen/arch/x86/hvm/vmsi.c          | 11 ++++++++++-
>  xen/drivers/passthrough/io.c     | 29 +++++++++++++++++++++++------
>  xen/include/asm-x86/hvm/vlapic.h |  2 ++
>  4 files changed, 38 insertions(+), 10 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> index 9466258d6f..d255ad8db7 100644
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -106,7 +106,7 @@ static void vlapic_clear_irr(int vector, struct vlapic *vlapic)
>      vlapic_clear_vector(vector, &vlapic->regs->data[APIC_IRR]);
>  }
>
> -static void sync_pir_to_irr(struct vcpu *v)
> +void vlapic_sync_pir_to_irr(struct vcpu *v)
>  {
>      if ( hvm_funcs.sync_pir_to_irr )
>          alternative_vcall(hvm_funcs.sync_pir_to_irr, v);
> @@ -114,7 +114,7 @@ static void sync_pir_to_irr(struct vcpu *v)
>
>  static int vlapic_find_highest_irr(struct vlapic *vlapic)
>  {
> -    sync_pir_to_irr(vlapic_vcpu(vlapic));
> +    vlapic_sync_pir_to_irr(vlapic_vcpu(vlapic));
>
>      return vlapic_find_highest_vector(&vlapic->regs->data[APIC_IRR]);
>  }
> @@ -1493,7 +1493,7 @@ static int lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
>      if ( !has_vlapic(v->domain) )
>          return 0;
>
> -    sync_pir_to_irr(v);
> +    vlapic_sync_pir_to_irr(v);
>
>      return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
>  }
> diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
> index 6597d9f719..fe488ccc7d 100644
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -118,7 +118,16 @@ void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
>
>      ASSERT(pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI);
>
> -    vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
> +    if ( hvm_funcs.deliver_posted_intr && pirq_dpci->gmsi.dest_vcpu_id != -1 )
> +        /*
> +         * When using posted interrupts multi-destination delivery mode is
> +         * forced to select a specific vCPU so that the PIRR can be synced
> +         * into IRR when the interrupt is destroyed or moved.
> +         */
> +        vmsi_inj_irq(vcpu_vlapic(d->vcpu[pirq_dpci->gmsi.dest_vcpu_id]),
> +                     vector, trig_mode, delivery_mode);
> +    else
> +        vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
>  }
>
>  /* Return value, -1 : multi-dests, non-negative value: dest_vcpu_id */
> diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
> index b292e79382..d3f1ae5c39 100644
> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -341,7 +341,7 @@ int pt_irq_create_bind(
>      {
>          uint8_t dest, delivery_mode;
>          bool dest_mode;
> -        int dest_vcpu_id;
> +        int dest_vcpu_id, prev_vcpu_id = -1;
>          const struct vcpu *vcpu;
>          uint32_t gflags = pt_irq_bind->u.msi.gflags &
>                            ~XEN_DOMCTL_VMSI_X86_UNMASKED;
> @@ -411,6 +411,7 @@ int pt_irq_create_bind(
>
>              pirq_dpci->gmsi.gvec = pt_irq_bind->u.msi.gvec;
>              pirq_dpci->gmsi.gflags = gflags;
> +            prev_vcpu_id = pirq_dpci->gmsi.dest_vcpu_id;
>          }
>      }
>      /* Calculate dest_vcpu_id for MSI-type pirq migration. */
> @@ -426,14 +427,24 @@ int pt_irq_create_bind(
>
>      pirq_dpci->gmsi.posted = false;
>      vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
> -    if ( iommu_intpost )
> +    if ( hvm_funcs.deliver_posted_intr && delivery_mode == dest_LowestPrio )
>      {
> -        if ( delivery_mode == dest_LowestPrio )
> -            vcpu = vector_hashing_dest(d, dest, dest_mode,
> -                                       pirq_dpci->gmsi.gvec);
> +        /*
> +         * NB: when using posted interrupts the vector is signaled
> +         * on the PIRR, and hence Xen needs to force interrupts to be
> +         * delivered to a specific vCPU in order to be able to sync PIRR
> +         * with IRR when the interrupt binding is destroyed, or else
> +         * pending interrupts in the previous vCPU PIRR field could be
> +         * delivered after the update.
> +         */
> +        vcpu = vector_hashing_dest(d, dest, dest_mode,
> +                                   pirq_dpci->gmsi.gvec);
>          if ( vcpu )
> -            pirq_dpci->gmsi.posted = true;
> +            pirq_dpci->gmsi.dest_vcpu_id = vcpu->vcpu_id;
>      }
> +    if ( iommu_intpost && vcpu )
> +        pirq_dpci->gmsi.posted = true;
> +
>      if ( vcpu && is_iommu_enabled(d) )
>          hvm_migrate_pirq(pirq_dpci, vcpu);
>
> @@ -442,6 +453,9 @@ int pt_irq_create_bind(
>              pi_update_irte(vcpu ? &vcpu->arch.hvm.vmx.pi_desc : NULL,
>                             info, pirq_dpci->gmsi.gvec);
>
> +    if ( hvm_funcs.deliver_posted_intr && prev_vcpu_id >= 0 )
> +        vlapic_sync_pir_to_irr(d->vcpu[prev_vcpu_id]);
> +
>      if ( pt_irq_bind->u.msi.gflags & XEN_DOMCTL_VMSI_X86_UNMASKED )
>      {
>          unsigned long flags;
> @@ -731,6 +745,9 @@ int pt_irq_destroy_bind(
>      else if ( pirq_dpci && pirq_dpci->gmsi.posted )
>          pi_update_irte(NULL, pirq, 0);
>
> +    if ( hvm_funcs.deliver_posted_intr && pirq_dpci->gmsi.dest_vcpu_id >= 0 )
> +        vlapic_sync_pir_to_irr(d->vcpu[pirq_dpci->gmsi.dest_vcpu_id]);
> +
>      if ( pirq_dpci && (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) &&
>           list_empty(&pirq_dpci->digl_list) )
>      {
> diff --git a/xen/include/asm-x86/hvm/vlapic.h b/xen/include/asm-x86/hvm/vlapic.h
> index dde66b4f0f..b0017d1dae 100644
> --- a/xen/include/asm-x86/hvm/vlapic.h
> +++ b/xen/include/asm-x86/hvm/vlapic.h
> @@ -150,4 +150,6 @@ bool_t vlapic_match_dest(
>      const struct vlapic *target, const struct vlapic *source,
>      int short_hand, uint32_t dest, bool_t dest_mode);
>
> +void vlapic_sync_pir_to_irr(struct vcpu *v);
> +
>  #endif /* __ASM_X86_HVM_VLAPIC_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel