Re: [Xen-devel] [PATCH v6 16/18] vmx: Add some scheduler hooks for VT-d posted interrupts
>>> On 10.09.15 at 15:27, <feng.wu@xxxxxxxxx> wrote:
>
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: Thursday, September 10, 2015 9:15 PM
>> To: Wu, Feng
>> Cc: Andrew Cooper; Dario Faggioli; George Dunlap; Tian, Kevin;
>> xen-devel@xxxxxxxxxxxxx; Keir Fraser
>> Subject: RE: [PATCH v6 16/18] vmx: Add some scheduler hooks for VT-d posted
>> interrupts
>>
>> >>> On 10.09.15 at 14:58, <feng.wu@xxxxxxxxx> wrote:
>> >
>> >> -----Original Message-----
>> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >> Sent: Thursday, September 10, 2015 8:45 PM
>> >> To: Wu, Feng
>> >> Cc: Andrew Cooper; Dario Faggioli; George Dunlap; Tian, Kevin;
>> >> xen-devel@xxxxxxxxxxxxx; Keir Fraser
>> >> Subject: RE: [PATCH v6 16/18] vmx: Add some scheduler hooks for VT-d posted
>> >> interrupts
>> >>
>> >> >>> On 10.09.15 at 14:34, <feng.wu@xxxxxxxxx> wrote:
>> >> >
>> >> >> -----Original Message-----
>> >> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >> >> Sent: Thursday, September 10, 2015 6:01 PM
>> >> >> To: Wu, Feng
>> >> >> Cc: Andrew Cooper; Dario Faggioli; George Dunlap; Tian, Kevin;
>> >> >> xen-devel@xxxxxxxxxxxxx; Keir Fraser
>> >> >> Subject: RE: [PATCH v6 16/18] vmx: Add some scheduler hooks for VT-d posted
>> >> >> interrupts
>> >> >>
>> >> >> >>> On 10.09.15 at 11:41, <feng.wu@xxxxxxxxx> wrote:
>> >> >> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >> >> >> Sent: Thursday, September 10, 2015 5:26 PM
>> >> >> >> >>> On 10.09.15 at 10:59, <feng.wu@xxxxxxxxx> wrote:
>> >> >> >> > First, how to check it while waiting to acquire the lock
>> >> >> >> > .pi_block_cpu didn't change?
>> >> >> >>
>> >> >> >> Note the difference between "check while waiting" and "check that
>> >> >> >> while waiting": The former is indeed hard to implement, while the
>> >> >> >> latter is pretty straightforward (and we do so elsewhere).
>> >> >> >>
>> >> >> >> > Secondly, even if we can check it, what should we do if
>> >> >> >> > .pi_block_cpu is changed after acquiring the lock, as I
>> >> >> >> > mentioned above?
>> >> >> >>
>> >> >> >> Drop the lock and start over. I.e. (taking your pseudo code)
>> >> >> >>
>> >> >> >> restart:
>> >> >> >>     local_pi_block_cpu = ...;
>> >> >> >>     bail-if-invalid (e.g. -1 in current model)
>> >> >> >>     spin_lock_irqsave(&per_cpu(, local_pi_block_cpu), flags);
>> >> >> >>     if ( local_pi_block_cpu != actual_pi_block_cpu ) {
>> >> >> >>         spin_unlock_irqrestore(&per_cpu(, local_pi_block_cpu), flags);
>> >> >> >>         goto restart;
>> >> >> >>     }
>> >> >> >
>> >> >> > Thanks a lot for showing me this pseudo code! My concern is that if
>> >> >> > .pi_block_cpu is changed to -1 at this point, it doesn't work.
>> >> >> > .pi_block_cpu being -1 here means the vCPU has been removed from
>> >> >> > the blocking list by others, so we cannot delete it again via
>> >> >> > list_del() here.
>> >> >>
>> >> >> Did you miss the "bail-if-invalid" above?
>> >> >
>> >> > I am sorry, am I missing something here? If .pi_block_cpu becomes
>> >> > -1 here (after the above 'if' statement has completed with
>> >> > local_pi_block_cpu == actual_pi_block_cpu), how can the
>> >> > "bail-if-invalid" above help?
>> >>
>> >> The (obvious, I thought) implication is that all assignments to
>> >> pi_block_cpu (along with all list manipulations) now need to happen
>> >> with the lock held.
>> >
>> > If all assignments to pi_block_cpu happen with the lock held, I don't
>> > think we need the above check; we can safely use .pi_block_cpu, right?
>>
>> No. In order to use it you need to make sure it's valid (or else
>> there's no valid lock to acquire). And once you have acquired the
>> lock, you have to check whether it changed / became invalid.
>
> If all assignments to .pi_block_cpu happen with the lock held,
> how can it get changed / become invalid once I have acquired
> the lock?
Again - if pi_block_cpu is invalid, which lock do you want to acquire?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel