
Re: [Xen-ia64-devel] [PATCH] Fix SMP-unsafe with XENMEM_add_to_physmap on HVM



Hi Kouya.

Doesn't flush_tlb_for_log_dirty() have the same issue?

As a minor code nit: could the new code fragment be moved under the
arch/ia64/vmx directory? It's somewhat inconsistent to define a
vmx-prefixed function under arch/ia64/xen.

something like:

void domain_flush_vtlb_all(struct domain *d)
{
        ...
        for_each_vcpu(d, v) {
                ...
                if (VMX_DOMAIN(v))
                        vmx_vcpu_flush_vtlb_all(v);
                else if (v->processor == cpu)
                        ...
        }
}
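
In case it helps, here is a rough, untested sketch of what that helper
might look like if moved under arch/ia64/vmx. It only rearranges the
pause/IPI logic already in your patch into a single vmx-prefixed entry
point; the file placement and the use of smp_processor_id() instead of
the caller's cached cpu variable are my assumptions:

```c
/* hypothetical placement: xen/arch/ia64/vmx/ -- untested sketch */

static void __vmx_vcpu_flush_vtlb_all(void *arg)
{
        struct vcpu *v = arg;

        /* Must only run while the target vcpu is paused. */
        BUG_ON(vcpu_runnable(v) || v->is_running);
        thash_purge_all(v);
}

void vmx_vcpu_flush_vtlb_all(struct vcpu *v)
{
        if (v == current) {
                /* Purge vTLB for the VT-i domain. */
                thash_purge_all(v);
                return;
        }

        /* Pause the vcpu so it cannot migrate while we flush. */
        vcpu_pause(v);
        if (v->processor == smp_processor_id())
                thash_purge_all(v);
        else
                smp_call_function_single(v->processor,
                        __vmx_vcpu_flush_vtlb_all, v, 1, 1);
        vcpu_unpause(v);
}
```

Then domain_flush_vtlb_all() would just call this for VMX vcpus.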

thanks,

On Fri, Jul 04, 2008 at 10:40:57AM +0900, Kouya Shimura wrote:
> XENMEM_add_to_physmap hypercall on HVM is SMP-unsafe
> and may cause a xen crash.
> 
> Actually I've met:
> 
> (XEN) ia64_fault, vector=0x18, ifa=0xe0000165c98814f0, 
> iip=0xf0000000040a1b80, ipsr=0x0000121008226010, isr=0x0000008000000030
> (XEN) General Exception: IA-64 Reserved Register/Field fault (data access).
> ...
> (XEN) ****************************************
> (XEN) Panic on CPU 2:
> (XEN) Fault in Xen.
> (XEN) ****************************************
> 
> This patch fixes it.
> 
> Signed-off-by: Kouya Shimura <kouya@xxxxxxxxxxxxxx>
> 

> diff -r 08f77df14cba xen/arch/ia64/xen/vhpt.c
> --- a/xen/arch/ia64/xen/vhpt.c        Wed Jul 02 11:30:37 2008 +0900
> +++ b/xen/arch/ia64/xen/vhpt.c        Fri Jul 04 09:35:32 2008 +0900
> @@ -243,37 +243,34 @@
>       }
>  }
>  
> +static void __vmx_vcpu_flush_vtlb_all(void *arg)
> +{
> +     struct vcpu *v = arg;
> +
> +     BUG_ON(vcpu_runnable(v) || v->is_running);
> +     thash_purge_all(v);
> +}
> +
>  // SMP: we can't assume v == current; the vcpu might move to another
>  // physical cpu, so a memory barrier is necessary.
>  // If we can guarantee that the vcpu can run on only this physical cpu
>  // (e.g. vcpu == current), smp_mb() is unnecessary.
>  void vcpu_flush_vtlb_all(struct vcpu *v)
>  {
> -     if (VMX_DOMAIN(v)) {
> -             /* This code may be call for remapping shared_info and
> -                grant_table share page from guest_physmap_remove_page()
> -                in arch_memory_op() XENMEM_add_to_physmap to realize
> -                PV-on-HVM feature. */
> -             /* FIXME: This is not SMP-safe yet about p2m table */
> -             /* Purge vTLB for VT-i domain */
> -             thash_purge_all(v);
> -     }
> -     else {
> -             /* First VCPU tlb.  */
> -             vcpu_purge_tr_entry(&PSCBX(v,dtlb));
> -             vcpu_purge_tr_entry(&PSCBX(v,itlb));
> -             smp_mb();
> +     /* First VCPU tlb.  */
> +     vcpu_purge_tr_entry(&PSCBX(v,dtlb));
> +     vcpu_purge_tr_entry(&PSCBX(v,itlb));
> +     smp_mb();
>  
> -             /* Then VHPT.  */
> -             if (HAS_PERVCPU_VHPT(v->domain))
> -                     vcpu_vhpt_flush(v);
> -             else
> -                     local_vhpt_flush();
> -             smp_mb();
> +     /* Then VHPT.  */
> +     if (HAS_PERVCPU_VHPT(v->domain))
> +             vcpu_vhpt_flush(v);
> +     else
> +             local_vhpt_flush();
> +     smp_mb();
>  
> -             /* Then mTLB.  */
> -             local_flush_tlb_all();
> -     }
> +     /* Then mTLB.  */
> +     local_flush_tlb_all();
>  
>       /* We could clear bit in d->domain_dirty_cpumask only if domain d is
>          not running on this processor.  There is currently no easy way to
> @@ -296,6 +293,30 @@
>       for_each_vcpu(d, v) {
>               if (!v->is_initialised)
>                       continue;
> +
> +             if (VMX_DOMAIN(v)) {
> +                     /*
> +                      * This code may be called for remapping shared_info
> +                      * and grant_table from guest_physmap_remove_page()
> +                      * in arch_memory_op() XENMEM_add_to_physmap to realize
> +                      * PV-on-HVM feature.
> +                      */
> +
> +                     if (v == current) {
> +                             /* Purge vTLB for VT-i domain */
> +                             thash_purge_all(v);
> +                             continue;
> +                     }
> +
> +                     vcpu_pause(v);
> +                     if (v->processor == cpu)
> +                             thash_purge_all(v);
> +                     else
> +                             smp_call_function_single(v->processor,
> +                                     __vmx_vcpu_flush_vtlb_all, v, 1, 1);
> +                     vcpu_unpause(v);
> +                     continue;
> +             }
>  
>               if (v->processor == cpu)
>                       vcpu_flush_vtlb_all(v);

> _______________________________________________
> Xen-ia64-devel mailing list
> Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-ia64-devel

-- 
yamahata

