Re: [Xen-devel] [RFC v1 11/15] vmx: Add a global wake-up vector for VT-d Posted-Interrupts
> -----Original Message-----
> From: Tian, Kevin
> Sent: Thursday, April 02, 2015 2:01 PM
> To: Wu, Feng; xen-devel@xxxxxxxxxxxxx
> Cc: JBeulich@xxxxxxxx; keir@xxxxxxx; Zhang, Yang Z
> Subject: RE: [RFC v1 11/15] vmx: Add a global wake-up vector for VT-d
> Posted-Interrupts
>
> > From: Wu, Feng
> > Sent: Wednesday, March 25, 2015 8:32 PM
> >
> > This patch adds a global vector which is used to wake up
> > the blocked vCPU when an interrupt is being posted to it.
> >
> > Signed-off-by: Feng Wu <feng.wu@xxxxxxxxx>
> > Suggested-by: Yang Zhang <yang.z.zhang@xxxxxxxxx>
> > ---
> > xen/arch/x86/hvm/vmx/vmx.c        | 33 +++++++++++++++++++++++++++++++++
> > xen/include/asm-x86/hvm/hvm.h     |  1 +
> > xen/include/asm-x86/hvm/vmx/vmx.h |  3 +++
> > xen/include/xen/sched.h           |  2 ++
> > 4 files changed, 39 insertions(+)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> > index ff5544d..b2b4c26 100644
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -89,6 +89,7 @@ DEFINE_PER_CPU(struct list_head, blocked_vcpu_on_cpu);
> > DEFINE_PER_CPU(spinlock_t, blocked_vcpu_on_cpu_lock);
> >
> > uint8_t __read_mostly posted_intr_vector;
> > +uint8_t __read_mostly pi_wakeup_vector;
> >
> > static int vmx_domain_initialise(struct domain *d)
> > {
> > @@ -131,6 +132,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
> > if ( v->vcpu_id == 0 )
> > v->arch.user_regs.eax = 1;
> >
> > + INIT_LIST_HEAD(&v->blocked_vcpu_list);
> > +
> > return 0;
> > }
> >
> > @@ -1834,11 +1837,19 @@ const struct hvm_function_table * __init start_vmx(void)
> > }
> >
> > if ( cpu_has_vmx_posted_intr_processing )
> > + {
> > alloc_direct_apic_vector(&posted_intr_vector, event_check_interrupt);
> > +
> > + if ( iommu_intpost )
> > + alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
> > + else
> > + vmx_function_table.pi_desc_update = NULL;
> > + }
>
> Just a style issue: the conditional logic above doesn't look intuitive to me.
> Usually we have:
> if ( iommu_intpost )
> vmx_function_table.pi_desc_update = func;
> else
> vmx_function_table.pi_desc_update = NULL;
>
> I suppose you will register the callback in a later patch, so it would be
> better to move the NULL assignment there too. Putting it here doesn't match
> the usual if...else implications. :-)
Your suggestion is good. Here is my thinking about this code fragment: this is
where the notification event handler is registered, so it seems natural to
register the wake-up event handler for VT-d PI here as well. Like the other
members of vmx_function_table, such as deliver_posted_intr and sync_pir_to_irr,
pi_desc_update is statically initialised to 'vmx_pi_desc_update' in the
definition of vmx_function_table, so only the NULL assignment is needed here.
Do you have any suggestions on how to do this more gracefully?
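To illustrate, here is a rough sketch of what I mean. vmx_pi_desc_update()
itself is only introduced by a later patch in this series, so the name below is
just what I intend to use:

    /* In xen/arch/x86/hvm/vmx/vmx.c, registered statically next to the other
     * posted-interrupt hooks (all other members elided): */
    static struct hvm_function_table __initdata vmx_function_table = {
        ...
        .deliver_posted_intr  = vmx_deliver_posted_intr,
        .sync_pir_to_irr      = vmx_sync_pir_to_irr,
        .pi_desc_update       = vmx_pi_desc_update,
        ...
    };

With the hook registered statically like this, start_vmx() only has to clear
.pi_desc_update when iommu_intpost (or posted-interrupt processing as a whole)
is unavailable, which is exactly what the fragment quoted above does.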
Thanks,
Feng
>
> > else
> > {
> > vmx_function_table.deliver_posted_intr = NULL;
> > vmx_function_table.sync_pir_to_irr = NULL;
> > + vmx_function_table.pi_desc_update = NULL;
> > }
> >
> > if ( cpu_has_vmx_ept
> > @@ -3255,6 +3266,28 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
> > }
> >
> > /*
> > + * Handle VT-d posted-interrupt when VCPU is blocked.
> > + */
> > +void pi_wakeup_interrupt(struct cpu_user_regs *regs)
> > +{
> > + struct vcpu *v;
> > + int cpu = smp_processor_id();
> > +
> > + spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
> > + list_for_each_entry(v, &per_cpu(blocked_vcpu_on_cpu, cpu), blocked_vcpu_list) {
> > + struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> > +
> > + if ( pi_test_on(pi_desc) == 1 )
> > + tasklet_schedule(&v->vcpu_wakeup_tasklet);
>
> why can't we directly call vcpu_unblock here?
Please consider the following scenario if we call vcpu_unblock() directly here:

pi_wakeup_interrupt() (blocked_vcpu_on_cpu_lock is held) --> vcpu_unblock() -->
vcpu_wake() --> vcpu_runstate_change() --> vmx_pi_desc_update()

In vmx_pi_desc_update() we may need to acquire blocked_vcpu_on_cpu_lock again,
which would cause a deadlock.
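To spell it out, the problematic chain would look roughly like this
(vmx_pi_desc_update() is the handler added by a later patch in this series; it
is shown here only to illustrate the locking):

    pi_wakeup_interrupt()
        spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));    /* lock taken */
        vcpu_unblock(v)
            -> vcpu_wake(v)
                -> vcpu_runstate_change(v, ...)
                    -> vmx_pi_desc_update(v, new_state)
                        /* may need the same per-CPU lock again: */
                        spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));

Since blocked_vcpu_on_cpu_lock is an ordinary (non-recursive) spinlock, taking
it a second time on the same path would deadlock. Deferring the wake-up to
vcpu_wakeup_tasklet lets vcpu_unblock() run after pi_wakeup_interrupt() has
dropped the lock.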
Thanks,
Feng
>
> > + }
> > + spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
> > +
> > + ack_APIC_irq();
> > + this_cpu(irq_count)++;
> > +}
> > +
> > +/*
> > * Local variables:
> > * mode: C
> > * c-file-style: "BSD"
> > diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> > index 0dc909b..a11a256 100644
> > --- a/xen/include/asm-x86/hvm/hvm.h
> > +++ b/xen/include/asm-x86/hvm/hvm.h
> > @@ -195,6 +195,7 @@ struct hvm_function_table {
> > void (*deliver_posted_intr)(struct vcpu *v, u8 vector);
> > void (*sync_pir_to_irr)(struct vcpu *v);
> > void (*handle_eoi)(u8 vector);
> > + void (*pi_desc_update)(struct vcpu *v, int new_state);
> >
> > /*Walk nested p2m */
> > int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa,
> > diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
> > index e643c3c..f4296ab 100644
> > --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> > @@ -34,6 +34,7 @@ DECLARE_PER_CPU(struct list_head, blocked_vcpu_on_cpu);
> > DECLARE_PER_CPU(spinlock_t, blocked_vcpu_on_cpu_lock);
> >
> > extern uint8_t posted_intr_vector;
> > +extern uint8_t pi_wakeup_vector;
> >
> > typedef union {
> > struct {
> > @@ -574,6 +575,8 @@ int alloc_p2m_hap_data(struct p2m_domain *p2m);
> > void free_p2m_hap_data(struct p2m_domain *p2m);
> > void p2m_init_hap_data(struct p2m_domain *p2m);
> >
> > +void pi_wakeup_interrupt(struct cpu_user_regs *regs);
> > +
> > /* EPT violation qualifications definitions */
> > #define _EPT_READ_VIOLATION 0
> > #define EPT_READ_VIOLATION (1UL<<_EPT_READ_VIOLATION)
> > diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> > index c874dd4..91f0912 100644
> > --- a/xen/include/xen/sched.h
> > +++ b/xen/include/xen/sched.h
> > @@ -148,6 +148,8 @@ struct vcpu
> >
> > struct vcpu *next_in_list;
> >
> > + struct list_head blocked_vcpu_list;
> > +
> > s_time_t periodic_period;
> > s_time_t periodic_last_event;
> > struct timer periodic_timer;
> > --
> > 2.1.0
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel