[Xen-devel] Re: One (possible) x86 get_user_pages bug
On 01/31/2011 12:10 PM, Kaushik Barde wrote:
> << I'm not sure I follow you here.  The issue with TLB flush IPIs is that
> the hypervisor doesn't know the purpose of the IPI and ends up
> (potentially) waking up a sleeping VCPU just to flush its tlb - but
> since it was sleeping there were no stale TLB entries to flush. >>
>
> That's what I was trying to understand: what is "sleep" here? Is it ACPI
> sleep or some internal scheduling state? If vCPUs are asynchronous to
> pCPUs in terms of ACPI sleep state, then they need to be synced up.
> That's where the entire ACPI model needs to be considered, and why KVM
> may not see this issue. Maybe I am missing something here.

No, nothing to do with ACPI.  Multiple virtual CPUs (VCPUs) can be
multiplexed onto a single physical CPU (PCPU), in much the same way as
tasks are scheduled onto CPUs (identically, in KVM's case).

If a VCPU is not currently running - either because it is simply
descheduled, or because it is blocked (what I slightly misleadingly
called "sleeping" above) in a hypercall - then it is not currently using
any physical CPU resources, including the TLBs.  In that case, there's no
need to flush that VCPU's TLB entries, because there are none.

> << A "few hundred uSecs" is really very slow - that's nearly a
> millisecond.  It's worth spending some effort to avoid those kinds of
> delays. >>
>
> Actually, I just checked: IPIs are usually 1000-1500 cycles long
> (comparable to a VMEXIT). My point is that the ideal solution is one
> where the virtual platform's behaviour (interrupts, memory, CPU state,
> etc.) is closer to bare metal. How to do it? Well, that's what needs to
> be figured out :-)

The interesting number is not the raw cost of an IPI, but the overall
cost of the remote TLB flush.

    J
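[Editorial aside: to make the scheduling argument concrete, here is a
minimal sketch in C.  All names below (struct vcpu, send_flush_ipi(),
flush_tlb_others()) are hypothetical and are not the actual Xen or Linux
code paths; the sketch only illustrates the logic described above: a VCPU
that is descheduled or blocked has no live TLB entries on any PCPU, so
there is nothing to flush and no reason to wake it - it can simply be
marked for a lazy flush before it next runs.]

/*
 * Illustrative only: hypothetical structures and helpers, not the real
 * Xen or Linux TLB-flush implementation.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu {
    int  id;
    bool running;       /* currently executing on some PCPU?        */
    bool need_flush;    /* flush lazily before it is scheduled again */
};

/* Stand-in for sending a TLB-flush IPI to the PCPU hosting this VCPU. */
static void send_flush_ipi(struct vcpu *v)
{
    printf("IPI -> vcpu %d (flushing live TLB entries)\n", v->id);
}

/* Flush the TLBs of other VCPUs, skipping ones that are not running. */
static void flush_tlb_others(struct vcpu **vcpus, int nr_vcpus)
{
    for (int i = 0; i < nr_vcpus; i++) {
        struct vcpu *v = vcpus[i];

        if (v->running) {
            /* Its TLB entries are live on a PCPU: flush them now. */
            send_flush_ipi(v);
        } else {
            /*
             * Descheduled or blocked in a hypercall: no PCPU holds its
             * TLB entries, so don't wake it.  Just make sure a flush
             * (or page-table reload) happens before it next runs.
             */
            v->need_flush = true;
            printf("vcpu %d not running: deferred flush\n", v->id);
        }
    }
}

int main(void)
{
    struct vcpu a = { .id = 0, .running = true  };
    struct vcpu b = { .id = 1, .running = false };
    struct vcpu *all[] = { &a, &b };

    flush_tlb_others(all, 2);
    return 0;
}

This is roughly the advantage being discussed in the thread for letting
the hypervisor, which knows which VCPUs are actually scheduled, handle
the remote flush, rather than having the guest blindly IPI every CPU in
the mask.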