
Re: [Xen-devel] Xen-unstable 4.8: HVM domain_crash called from emulate.c:144 RIP: c000:[<000000000000336a>]



>>> On 15.06.16 at 16:32, <boris.ostrovsky@xxxxxxxxxx> wrote:
> So perhaps we shouldn't latch data for anything over page size.

But why? What we latch is the start of the accessed range, so
the repeat count shouldn't matter?

> Something like this (it seems to work):

I'm rather hesitant to take a change like this without understanding
why it helps, or whether it really deals with the problem in all
cases.

Jan

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1195,7 +1195,8 @@ static int hvmemul_rep_movs(
>          if ( rc != X86EMUL_OKAY )
>              return rc;
>  
> -        latch_linear_to_phys(vio, saddr, sgpa, 0);
> +        if ( *reps * bytes_per_rep <= PAGE_SIZE )
> +            latch_linear_to_phys(vio, saddr, sgpa, 0);
>      }
>  
>      bytes = PAGE_SIZE - (daddr & ~PAGE_MASK);
> @@ -1214,7 +1215,8 @@ static int hvmemul_rep_movs(
>          if ( rc != X86EMUL_OKAY )
>              return rc;
>  
> -        latch_linear_to_phys(vio, daddr, dgpa, 1);
> +        if ( *reps * bytes_per_rep <= PAGE_SIZE )
> +            latch_linear_to_phys(vio, daddr, dgpa, 1);
>      }
>  
>      /* Check for MMIO ops */
> @@ -1339,7 +1341,8 @@ static int hvmemul_rep_stos(
>          if ( rc != X86EMUL_OKAY )
>              return rc;
>  
> -        latch_linear_to_phys(vio, addr, gpa, 1);
> +        if ( *reps * bytes_per_rep <= PAGE_SIZE )
> +            latch_linear_to_phys(vio, addr, gpa, 1);
>      }
>  
>      /* Check for MMIO op */




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
