Re: [Xen-devel] [PATCH v6 14/18] vmx: posted-interrupt handling when vCPU is blocked
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Monday, September 07, 2015 7:48 PM
> To: Wu, Feng
> Cc: Andrew Cooper; Tian, Kevin; xen-devel@xxxxxxxxxxxxx; Keir Fraser
> Subject: Re: [PATCH v6 14/18] vmx: posted-interrupt handling when vCPU is blocked
>
> >>> On 25.08.15 at 03:57, <feng.wu@xxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/vmx/vmcs.c
> > +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> > @@ -661,6 +661,9 @@ int vmx_cpu_up(void)
> >      if ( cpu_has_vmx_vpid )
> >          vpid_sync_all();
> >
> > +    INIT_LIST_HEAD(&per_cpu(pi_blocked_vcpu, cpu));
> > +    spin_lock_init(&per_cpu(pi_blocked_vcpu_lock, cpu));
>
> This initialization appears to be the sole reason why the respective
> variables can't be static (in vmx.c). I'd really like to ask for this to
> be adjusted, even if this means adding another call out of
> vmx_cpu_up() into vmx.c.
>
> > @@ -106,6 +114,9 @@ static int vmx_vcpu_initialise(struct vcpu *v)
> >
> >      spin_lock_init(&v->arch.hvm_vmx.vmcs_lock);
> >
> > +    INIT_LIST_HEAD(&v->arch.hvm_vmx.pi_blocked_vcpu_list);
> > +    INIT_LIST_HEAD(&v->arch.hvm_vmx.pi_vcpu_on_set_list);
>
> Are these really needed (these aren't list heads afaict)?

I can remove the initialization operations.

> > @@ -1976,6 +1987,54 @@ static struct hvm_function_table __initdata vmx_function_table = {
> >      .altp2m_vcpu_emulate_vmfunc = vmx_vcpu_emulate_vmfunc,
> >  };
> >
> > +/*
> > + * Handle VT-d posted-interrupt when VCPU is blocked.
> > + */
>
> This is (and hence should be formatted as) a single line comment.
>
> > +static void pi_wakeup_interrupt(struct cpu_user_regs *regs)
> > +{
> > +    struct arch_vmx_struct *vmx, *tmp;
> > +    struct vcpu *v;
> > +    spinlock_t *lock = &this_cpu(pi_blocked_vcpu_lock);
> > +    struct list_head *blocked_vcpus = &this_cpu(pi_blocked_vcpu);
>
> At least on possibly performance relevant paths please avoid
> multiple back-to-back uses of this_cpu() (use per_cpu() instead).

Thanks for the suggestion. Would you mind sharing more information about
why per_cpu() is preferable in this case? Thanks!

> > +    LIST_HEAD(list);
> > +
> > +    spin_lock(lock);
> > +
> > +    /*
> > +     * XXX: The length of the list depends on how many vCPU is current
> > +     * blocked on this specific pCPU. This may hurt the interrupt latency
> > +     * if the list grows to too many entries.
> > +     */
> > +    list_for_each_entry_safe(vmx, tmp, blocked_vcpus, pi_blocked_vcpu_list)
> > +    {
> > +        if ( pi_test_on(&vmx->pi_desc) )
> > +        {
> > +            list_del_init(&vmx->pi_blocked_vcpu_list);
>
> Why not just list_del() (also further down in the second loop)?

What is the bad effect of list_del_init() here?

> > +
> > +            /*
> > +             * We cannot call vcpu_unblock here, since it also needs
> > +             * 'pi_blocked_vcpu_lock', we store the vCPUs with ON
> > +             * set in another list and unblock them after we release
> > +             * 'pi_blocked_vcpu_lock'.
> > +             */
>
> At least at this point in time this doesn't appear to be correct: I
> cannot see how, after this patch, vcpu_unblock() would end up
> acquiring the lock.

vcpu_unblock()
    --> vcpu_wake()
        --> arch_vcpu_wake_prepare()
            --> spin_lock_irqsave(&per_cpu(pi_blocked_vcpu_lock,
                                  v->arch.hvm_vmx.pi_block_cpu), flags);

> And without that (plus without anything ever
> adding to pi_blocked_vcpu), this moving between two lists seems
> rather odd; I think the splitting of the series into patches needs
> to be re-thought unless (preferably) you can find a (reasonably
> clean) way without using two lists in the first place.
Do you mean that if two lists are used here, I need to re-split this
patch set? I wonder why the two lists affect how the series is split?

Thanks,
Feng

> > +            list_add_tail(&vmx->pi_vcpu_on_set_list, &list);
> > +        }
> > +    }
> > +
> > +    spin_unlock(lock);
> > +
> > +    ack_APIC_irq();
>
> So why here and not further up or further down? If this is "as
> early as possible", a comment should say why it can't be moved
> further up.
>
> > +    list_for_each_entry_safe(vmx, tmp, &list, pi_vcpu_on_set_list)
> > +    {
> > +        v = container_of(vmx, struct vcpu, arch.hvm_vmx);
> > +        list_del_init(&vmx->pi_vcpu_on_set_list);
> > +        vcpu_unblock(v);
> > +    }
> > +
> > +    this_cpu(irq_count)++;
>
> This would probably better pair with ack_APIC_irq().
>
> Jan
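
As background to the two-list handling questioned above: the intent of the
patch is to move every entry whose ON bit is set onto a private list while
the per-CPU lock is held, and to perform the actual wakeups only after the
lock has been dropped, because the wakeup path may take the same lock.
Below is a minimal, self-contained C sketch of that deferred-wakeup
pattern; every name and type in it is invented for illustration and is not
the Xen code under review.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for a blocked vCPU entry; 'on' mimics pi_test_on(). */
    struct blocked_entry {
        int id;
        bool on;
        struct blocked_entry *next;
    };

    static pthread_mutex_t blocked_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct blocked_entry *blocked_list; /* shared, lock-protected */

    /* Stand-in for vcpu_unblock(); assume it may itself take blocked_lock. */
    static void wake_entry(struct blocked_entry *e)
    {
        printf("waking entry %d\n", e->id);
    }

    static void wakeup_handler(void)
    {
        struct blocked_entry *e, **pprev, *ready = NULL;

        pthread_mutex_lock(&blocked_lock);

        /* Pass 1: unlink ready entries onto a private list, deferring wakeup. */
        for ( pprev = &blocked_list; (e = *pprev) != NULL; )
        {
            if ( e->on )
            {
                *pprev = e->next;   /* remove from the shared list */
                e->next = ready;    /* push onto the private list  */
                ready = e;
            }
            else
                pprev = &e->next;
        }

        pthread_mutex_unlock(&blocked_lock);

        /* Pass 2: the lock is dropped, so waking (which may relock) is safe. */
        while ( (e = ready) != NULL )
        {
            ready = e->next;
            wake_entry(e);
        }
    }

    int main(void)
    {
        for ( int i = 0; i < 3; i++ )
        {
            struct blocked_entry *e = calloc(1, sizeof(*e));

            e->id = i;
            e->on = (i != 1);       /* entry 1 stays blocked */
            e->next = blocked_list;
            blocked_list = e;
        }

        wakeup_handler();           /* wakes entries 0 and 2 */
        return 0;
    }

The sketch only illustrates the lock-ordering argument; whether Xen really
needs the second list, or can avoid it as Jan suggests, is exactly what the
thread above is discussing.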
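
On the list_del() vs list_del_init() question above: in the Xen/Linux list
implementation, list_del() only unlinks the entry (leaving its pointers
unusable afterwards), while list_del_init() additionally re-initialises the
entry so that list_empty() on the entry itself is true afterwards. The toy
program below uses a hand-rolled list rather than <xen/list.h> to show that
difference; the names are invented for the sketch.

    #include <stdbool.h>
    #include <stdio.h>

    struct node {
        struct node *prev, *next;
    };

    static void list_init(struct node *n)        { n->prev = n->next = n; }
    static bool node_empty(const struct node *n) { return n->next == n; }

    static void list_add_tail(struct node *n, struct node *head)
    {
        n->prev = head->prev;
        n->next = head;
        head->prev->next = n;
        head->prev = n;
    }

    static void toy_list_del(struct node *n)
    {
        n->prev->next = n->next;
        n->next->prev = n->prev;
        /* the real list_del() would now poison n->prev / n->next */
    }

    static void toy_list_del_init(struct node *n)
    {
        toy_list_del(n);
        list_init(n);               /* node can safely be tested/reused */
    }

    int main(void)
    {
        struct node head, a, b;

        list_init(&head);
        list_init(&a);
        list_init(&b);
        list_add_tail(&a, &head);
        list_add_tail(&b, &head);

        toy_list_del_init(&a);
        toy_list_del(&b);           /* b's pointers still reference the list */

        printf("a empty after del_init: %d\n", node_empty(&a)); /* 1 */
        printf("b empty after del     : %d\n", node_empty(&b)); /* 0 */
        return 0;
    }

Jan's question is therefore whether the extra re-initialisation buys
anything in this handler; the sketch only shows what the two calls do
differently, not which one the final patch should use.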