
Re: [Xen-devel] [V3 PATCH 3/9] PVH dom0: move some pv specific code to static functions



>>> On 27.11.13 at 03:27, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> In this preparatory patch, too, some PV specific code is
> carved out into static functions. No functional change.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>

Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

albeit with a few minor comments:

> +/* Pages that are part of page tables must be read only. */

This and the other function's comment would fit better if they
remained at their call sites.
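
For illustration only (the call site isn't quoted here, so the exact
arguments may of course differ), I mean something along the lines of

    /* Pages that are part of page tables must be read only. */
    mark_pv_pt_pages_rdonly(d, l4start, vpt_start, nr_pt_pages);

i.e. the reader of the caller would still see why the permissions of
these pages get changed.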

> +static __init void mark_pv_pt_pages_rdonly(struct domain *d,
> +                                           l4_pgentry_t *l4start,
> +                                           unsigned long vpt_start,
> +                                           unsigned long nr_pt_pages)
> +{
> +    unsigned long count;
> +    struct page_info *page;
> +    l4_pgentry_t *pl4e;
> +    l3_pgentry_t *pl3e;
> +    l2_pgentry_t *pl2e;
> +    l1_pgentry_t *pl1e;
> +
> +    pl4e = l4start + l4_table_offset(vpt_start);
> +    pl3e = l4e_to_l3e(*pl4e);
> +    pl3e += l3_table_offset(vpt_start);
> +    pl2e = l3e_to_l2e(*pl3e);
> +    pl2e += l2_table_offset(vpt_start);
> +    pl1e = l2e_to_l1e(*pl2e);
> +    pl1e += l1_table_offset(vpt_start);
> +    for ( count = 0; count < nr_pt_pages; count++ )
> +    {
> +        l1e_remove_flags(*pl1e, _PAGE_RW);
> +        page = mfn_to_page(l1e_get_pfn(*pl1e));
> +
> +        /* Read-only mapping + PGC_allocated + page-table page. */
> +        page->count_info         = PGC_allocated | 3;
> +        page->u.inuse.type_info |= PGT_validated | 1;
> +
> +        /* Top-level p.t. is pinned. */
> +        if ( (page->u.inuse.type_info & PGT_type_mask) ==
> +             (!is_pv_32on64_domain(d) ?
> +              PGT_l4_page_table : PGT_l3_page_table) )
> +        {
> +            page->count_info        += 1;
> +            page->u.inuse.type_info += 1 | PGT_pinned;
> +        }
> +
> +        /* Iterate. */
> +        if ( !((unsigned long)++pl1e & (PAGE_SIZE - 1)) )
> +        {
> +            if ( !((unsigned long)++pl2e & (PAGE_SIZE - 1)) )
> +            {
> +                if ( !((unsigned long)++pl3e & (PAGE_SIZE - 1)) )
> +                    pl3e = l4e_to_l3e(*++pl4e);
> +                pl2e = l3e_to_l2e(*pl3e);
> +            }
> +            pl1e = l2e_to_l1e(*pl2e);
> +        }
> +    }
> +}
> +
> +/* Set up the phys->machine table if not part of the initial mapping. */

Even more so here, since the second half of the comment actually
refers to code that you left at the call site.
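
Again just as an illustration (the condition shown is a placeholder,
not the actual code at the call site):

    /* Set up the phys->machine table if not part of the initial mapping. */
    if ( /* ... physmap not covered by the initial mapping ... */ )
        setup_pv_physmap(d, pgtbl_pfn, v_start, v_end,
                         vphysmap_start, vphysmap_end, nr_pages);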

> +static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
> +                                    unsigned long v_start, unsigned long v_end,
> +                                    unsigned long vphysmap_start,
> +                                    unsigned long vphysmap_end,
> +                                    unsigned long nr_pages)
> +{
> +    struct page_info *page = NULL;
> +    l4_pgentry_t *pl4e = NULL, *l4start;

Pointless initializer.

> +    l3_pgentry_t *pl3e = NULL;
> +    l2_pgentry_t *pl2e = NULL;
> +    l1_pgentry_t *pl1e = NULL;
> +
> +    l4start = map_domain_page(pgtbl_pfn);

Instead, this one could become the initializer of l4start.
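
I.e., just as a sketch of what I mean (combining this with the remark
above):

    l4_pgentry_t *pl4e, *l4start = map_domain_page(pgtbl_pfn);

which also gets rid of the pointless NULL initializer.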

Jan

