
Re: [Xen-devel] [V10 PATCH 0/4] pvh dom0 patches...



On Fri, 2 May 2014 13:05:23 +0200
Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:

> On 01/05/14 03:19, Mukesh Rathor wrote:
> > On Wed, 30 Apr 2014 11:12:16 -0700
> > Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> > 
> >> On Wed, 30 Apr 2014 16:11:39 +0200
> >> Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >>
> >>> On 30/04/14 03:06, Mukesh Rathor wrote:
> >> .....
> >>
> >>> Hello Mukesh,
> >>>
> >>> Thanks for the new version, unfortunately when trying to boot
> >>> FreeBSD Dom0 with this version I get the following hypervisor
> >>> crash (it works fine with previous versions):
> >>
> >> Aha, Jan, there's the vioapic crash!! Roger, see:
> >>
> >> http://www.gossamer-threads.com/lists/xen/devel/325784
> >>
> >> I had seen this a few weeks ago, but could not reproduce it last week
> >> despite several attempts. You are seeing this in V10 because I
> >> dropped the vioapic patch from V9 (included below).
> >>
> >> BTW, since I'm not able to reproduce this, can you kindly check
> >> where the ept violation is coming from? Is that on an io space?
> >> Also, our binaries don't match, so can you please confirm it's the 
> >> call from:
> >>
> >> hvm_hap_nested_page_fault():
> >>     if ( (p2mt == p2m_mmio_dm) ||
> >>          (access_w && (p2mt == p2m_ram_ro)) )
> >>     {
> >>         put_gfn(p2m->domain, gfn);
> >>         if ( !handle_mmio() )   <==========
> >>             hvm_inject_hw_exception(TRAP_gp_fault, 0);
> >>
> >> In which case, what's the p2mt?
> >>
> > 
> > Hey Roger,
> > 
> > I tried a few things, but still could not reproduce it. I saw it a few
> > weeks ago, and I think I misread the code, thinking
> > hvm_hap_nested_page_fault was calling handle_mmio unconditionally,
> > and quickly came up with the vioapic patch for v9.
> > 
> > So, can you please try with the vioapic patch? Then one of two things
> > will happen:
> > 
> >   1. The EPT violation is genuine, in which case it will return
> >      successfully to ept_handle_violation, which will print the
> >      gfn/mfn info for further debugging.
> >   2. The emulation will be handled, in which case we need to know
> >      what it was (mmio_dm or ram_ro) and where it came from in dom0.
> >      Both are unexpected.
> 
> With the patch applied I can boot fine, no error messages at all. I've
> printed the address that's causing the vioapic_range call: it's
> 0x1073741824, which according to the e820 map passed by Xen falls
> into a region marked as valid memory:
> 
> SMAP type=01 base=0000000000100000 len=000000003ff6e000
> 
> The crash happens because FreeBSD scrubs all valid memory at early
> boot when booted with hw.memtest.tests=1.

Hi Roger,

I think something else is going on here.
The vioapic address check is fenced by an is_hvm check:

    if ( !nestedhvm_vcpu_in_guestmode(v)
         && is_hvm_vcpu(v)    <====
         && gfn == PFN_DOWN(vlapic_base_address(vcpu_vlapic(v))) )
    {

so the call should be coming from the place I mentioned above.
The p2mt combined with the pfn would hopefully tell what's going on.
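
For reference, to decode the p2mt once it's printed: going from memory of
xen/include/asm-x86/p2m.h (so please double-check against your tree), the
first few p2m_type_t values are:

    typedef enum {
        p2m_ram_rw = 0,        /* normal read/write guest RAM */
        p2m_invalid = 1,       /* nothing mapped here */
        p2m_ram_logdirty = 2,  /* temporarily read-only for log-dirty */
        p2m_ram_ro = 3,        /* read-only; writes are discarded */
        p2m_mmio_dm = 4,       /* reads/writes go to the device model */
        p2m_mmio_direct = 5,   /* read/write mapping of genuine MMIO */
        /* ... */
    } p2m_type_t;

so a p2mt of 3 or 4 in the output would confirm we took the mmio_dm/ram_ro
path.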

Can you kindly remove the vioapic patch, apply the patch below, and post
the output from both hvm_hap_nested_page_fault and ept_handle_violation?

thanks
mukesh


index ac05160..dcffc6d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1667,6 +1667,15 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
          (access_w && (p2mt == p2m_ram_ro)) )
     {
         put_gfn(p2m->domain, gfn);
+
+        if ( is_pvh_vcpu(v) )
+        {
+            printk("hvm_hap_nested_page_fault: gfn:%lx gla:%lx p2mt:%d\n",
+                   gfn, gla, p2mt);
+            rc = 0;
+            goto out;
+        }
+
         if ( !handle_mmio() )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         rc = 1;
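
With this applied, the fault should show up on the Xen console as a line
like the one below (the gfn/gla values here are made up, just to show the
format; a p2mt of 4 would be p2m_mmio_dm going by the enum above):

    (XEN) hvm_hap_nested_page_fault: gfn:40000 gla:ffffffff81234000 p2mt:4

and since rc is set to 0 for the pvh case, ept_handle_violation should then
also print its gfn/mfn details for the same fault, so we get both pieces of
information from one run.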


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

