
Re: [Xen-devel] [V10 PATCH 0/4] pvh dom0 patches...



On Mon, 5 May 2014 10:52:54 +0200
Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:

> On 03/05/14 02:01, Mukesh Rathor wrote:
> > On Fri, 2 May 2014 13:05:23 +0200
> > Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> > 
> >> On 01/05/14 03:19, Mukesh Rathor wrote:
> >>> On Wed, 30 Apr 2014 11:12:16 -0700
> >>> Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> >>>
> >>>> On Wed, 30 Apr 2014 16:11:39 +0200
> >>>> Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >>>>
> >>>>> On 30/04/14 03:06, Mukesh Rathor wrote:
> >>>> .....
> >>>>
> >>>>> Hello Mukesh,
> >>>>>
.......
> >> With the patch applied I can boot fine, no error messages at all.
> >> I've printed the address that's causing the vioapic_range call,
> >> it's 1073741824 (0x40000000), which according to the e820 map passed
> >> by Xen
> >> falls into a region marked as valid memory:
> >>
> >> SMAP type=01 base=0000000000100000 len=000000003ff6e000
> >>
> >> The crash happens because FreeBSD scrubs all valid memory at early
> >> boot when booted with hw.memtest.tests=1.
> > 
> > Hi Roger,
> 
> Hello Mukesh, thanks for the help.
> 
> > I think something else is going on here. 
> > The vioapic address check is fenced by an is_hvm check:
> > 
> >     if ( !nestedhvm_vcpu_in_guestmode(v)
> >          && is_hvm_vcpu(v)    <====
> >          && gfn == PFN_DOWN(vlapic_base_address(vcpu_vlapic(v))) )
> >     {
> 
> AFAIK this is not the path that's causing the fault; the fault comes
> from:
> 
>     if ( (p2mt == p2m_mmio_dm) ||
>          (access_w && (p2mt == p2m_ram_ro)) )
>     {
>         put_gfn(p2m->domain, gfn);
>         if ( !handle_mmio() ) <=====
>             hvm_inject_hw_exception(TRAP_gp_fault, 0);
>         rc = 1;
>         goto out;
>     }
> 
> This was happening because I was trying to access a gpfn from outside
> of the p2m map, one which didn't have a valid mfn. The type of the
> page was p2m_mmio_dm, the access type was p2m_access_n, and the mfn
> was not valid (I've done a p2m->get_entry on the faulting address).

Ok, I know what's going on. By default, the p2m type returned is
p2m_mmio_dm. I'll resubmit my vioapic patch. But your real issue here
is that the pages released when the holes were punched are not being
added back; see below for that. Once you fix that, you won't see this
fault, unless some other kernel bug causes an EPT violation.
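
Just to illustrate the handle_mmio() side of it, below is roughly the
shape of the fence I have in mind -- a sketch only, not the actual
patch ('v' here is the faulting vcpu as in the snippet you quoted, and
the exact placement in hvm_hap_nested_page_fault() may well end up
different):

    if ( (p2mt == p2m_mmio_dm) ||
         (access_w && (p2mt == p2m_ram_ro)) )
    {
        put_gfn(p2m->domain, gfn);
        /* A PVH guest has no device model behind it, so there is
         * nothing for handle_mmio() to forward this access to. */
        if ( is_pvh_vcpu(v) || !handle_mmio() )
            hvm_inject_hw_exception(TRAP_gp_fault, 0);
        rc = 1;
        goto out;
    }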

> This was because I was using start_info->nr_pages as the number of
> usable RAM pages, but AFAICT from the code in domain_build.c,
> pvh_map_all_iomem is making holes in the p2m, but it is not adding
> those freed pages back to the end of the memory map, so the value in
> nr_pages is not the number of usable RAM pages, but the number of
> pages in the p2m map (taking into account both usable RAM pages and
> p2m_mmio_direct regions).
> 
> I'm not sure if this logic is correct, shouldn't the freed pages by
> pvh_map_all_iomem be added to the end of the memory map?

Yeah, it's a bit confusing. Let me talk in terms of Linux.

In the case of a PV dom0, Linux parses the e820 and punches the holes
in the p2m itself; see xen_set_identity_and_release(). For PVH we can
skip some of that (since it already happened in Xen), but we still use
the "released" count to keep track of those pages. Later,
xen_populate_chunk() adds those "released" pages back via
XENMEM_populate_physmap. This happens for both PV and PVH, so the
pages are added back.

xen_memory_setup():
        /*
         * Populate back the non-RAM pages and E820 gaps that had been
         * released. */
        populated = xen_populate_chunk(map, memmap.nr_entries,
                        max_pfn, &last_pfn, xen_released_pages);
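
In case it's useful, the populate-back step itself boils down to one
XENMEM_populate_physmap call per pfn. A minimal sketch, loosely based
on what xen_do_chunk()/xen_populate_chunk() do in arch/x86/xen/setup.c
(not a drop-in for FreeBSD; check the structure fields against
xen/interface/memory.h in your tree):

    /* needs xen/interface/memory.h and the hypercall wrappers */
    static unsigned long populate_back(unsigned long start_pfn,
                                       unsigned long end_pfn)
    {
        struct xen_memory_reservation reservation = {
            .extent_order = 0,          /* single 4K pages */
            .domid        = DOMID_SELF,
        };
        unsigned long pfn, done = 0;

        for (pfn = start_pfn; pfn < end_pfn; pfn++) {
            xen_pfn_t frame = pfn;      /* populate this gpfn */

            set_xen_guest_handle(reservation.extent_start, &frame);
            reservation.nr_extents = 1;

            /* hand one of the "released" pages back at this gpfn */
            if (HYPERVISOR_memory_op(XENMEM_populate_physmap,
                                     &reservation) != 1)
                break;
            done++;
        }
        return done;
    }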


Perhaps your logic that does something similar for PV needs to make
sure it also populates the pages back for PVH? You just don't need to
punch the holes in the p2m as you do for PV; for PVH you can skip that
part. Hope that makes sense.
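
And for the nr_pages part: rather than trusting start_info->nr_pages,
you could count the usable RAM pages straight from the memory map Xen
hands you (XENMEM_memory_map). Purely illustrative -- the e820entry /
E820_RAM / PAGE_SHIFT names below are the Xen/Linux ones, FreeBSD's
smap structures will look different:

    static unsigned long count_usable_ram_pages(const struct e820entry *map,
                                                unsigned int nr_entries)
    {
        unsigned long pages = 0;
        unsigned int i;

        for (i = 0; i < nr_entries; i++) {
            if (map[i].type != E820_RAM)   /* skip MMIO holes, reserved, ... */
                continue;
            pages += map[i].size >> PAGE_SHIFT;
        }
        return pages;
    }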

FWIW, my very first patch didn't do this in Xen; it was done in Linux,
the same as for PV, and it required a new hypercall. But several
maintainers felt we should map all iomem in Xen upfront.

thanks
mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

