
Re: [Xen-devel] [PATCH][3/4] Enable 1GB for Xen HVM host page



At 16:37 +0000 on 23 Feb (1266943030), Huang2, Wei wrote:
> I was hoping that someone else would pick up the 1GB PoD task. :P I
> can try to implement this feature if deemed necessary.

I think it will be OK - PoD is only useful with balloon drivers, which
currently don't even maintain 2MB superpages, so it's probably not worth
engineering up 1GB PoD. 

Tim.

> -Wei
> 
> -----Original Message-----
> From: Tim Deegan [mailto:Tim.Deegan@xxxxxxxxxx] 
> Sent: Tuesday, February 23, 2010 4:07 AM
> To: Huang2, Wei
> Cc: 'xen-devel@xxxxxxxxxxxxxxxxxxx'; Keir Fraser; Xu, Dongxiao
> Subject: Re: [Xen-devel] [PATCH][3/4] Enable 1GB for Xen HVM host page
> 
> At 17:18 +0000 on 22 Feb (1266859128), Wei Huang wrote:
> > This patch changes the P2M code to work with 1GB pages.
> > 
> > Signed-off-by: Wei Huang <wei.huang2@xxxxxxx>
> > Acked-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
> 
>  
> > @@ -1064,6 +1093,19 @@
> >      if ( unlikely(d->is_dying) )
> >          goto out_fail;
> >  
> > +    /* Because PoD does not have a cache list for 1GB pages, it has to
> > +     * remap the 1GB region as 2MB chunks for a retry. */
> > +    if ( order == 18 )
> > +    {
> > +        gfn_aligned = (gfn >> order) << order;
> > +        for( i = 0; i < (1 << order); i += (1 << 9) )
> > +            set_p2m_entry(d, gfn_aligned + i, _mfn(POPULATE_ON_DEMAND_MFN), 9,
> > +                          p2m_populate_on_demand);
> 
> I think you only need one set_p2m_entry call here - it will split the
> 1GB entry without needing another 511 calls.
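> Roughly (an untested sketch, reusing the arguments from your loop
> body):
> 
>     gfn_aligned = (gfn >> order) << order;
>     set_p2m_entry(d, gfn_aligned, _mfn(POPULATE_ON_DEMAND_MFN), 9,
>                   p2m_populate_on_demand);
> 
> should cover the whole 1GB range, since the first 2MB write splits the
> 1GB entry for you.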
> 
> Was the decision not to implement populate-on-demand for 1GB pages based
> on not thinking it's a good idea or not wanting to do the work? :)
> How much performance do PoD guests lose by not having it?
> 
> > +        audit_p2m(d);
> > +        p2m_unlock(p2md);
> > +        return 0;
> > +    }
> > +
> >      /* If we're low, start a sweep */
> >      if ( order == 9 && page_list_empty(&p2md->pod.super) )
> >          p2m_pod_emergency_sweep_super(d);
> > @@ -1196,6 +1238,7 @@
> >      l1_pgentry_t *p2m_entry;
> >      l1_pgentry_t entry_content;
> >      l2_pgentry_t l2e_content;
> > +    l3_pgentry_t l3e_content;
> >      int rv=0;
> >  
> >      if ( tb_init_done )
> > @@ -1222,18 +1265,44 @@
> >          goto out;
> >  #endif
> >      /*
> > +     * Try to allocate a 1GB page mapping if this feature is supported.
> > +     *
> >       * When using PAE Xen, we only allow 33 bits of pseudo-physical
> >       * address in translated guests (i.e. 8 GBytes).  This restriction
> >       * comes from wanting to map the P2M table into the 16MB RO_MPT hole
> >       * in Xen's address space for translated PV guests.
> >       * When using AMD's NPT on PAE Xen, we are restricted to 4GB.
> >       */
> 
> Please move this comment closer to the code it describes.  
> 
> Also maybe a BUG_ON(CONFIG_PAGING_LEVELS == 3) in the order-18 case
> would be useful, since otherwise it looks like order-18 allocations are
> exempt from the restriction.
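> For illustration (sketch only; use whatever name this function gives
> the order argument):
> 
>     BUG_ON( (CONFIG_PAGING_LEVELS == 3) && (page_order == 18) );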
> 
> Actually, I don't see where you enforce that - do you?
> 
> Tim.
> 
> 
> -- 
> Tim Deegan <Tim.Deegan@xxxxxxxxxx>
> Principal Software Engineer, XenServer Engineering
> Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
> 
> 

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

