Re: [Xen-devel] [PATCH v1 2/8]: PVH mmu changes
On Mon, 24 Sep 2012 12:55:31 +0100
Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> There are few code style issues on this patch, I suggest you run it
> through scripts/checkpatch.pl, it should be able to catch all these
> errors.
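
For anyone new to checkpatch: it runs directly on a patch file.
Assuming the series was exported with git format-patch and this patch
is 0002-pvh-mmu-changes.patch (hypothetical filename), the invocation
would be:

    $ ./scripts/checkpatch.pl 0002-pvh-mmu-changes.patch

It prints each style error or warning along with the offending line.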
>
> On Fri, 21 Sep 2012, Mukesh Rathor wrote:
> > ---
> > arch/x86/xen/mmu.c    | 180 ++++++++++++++++++++++++++++++++++++++++++++--
> > arch/x86/xen/mmu.h    |   2 +
> > include/xen/xen-ops.h |  12 +++-
> > 3 files changed, 188 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > +static void xen_dom0pvh_set_pte_at(struct mm_struct *mm, unsigned long addr,
> > +                                   pte_t *ptep, pte_t pteval)
> > +{
> > +        native_set_pte(ptep, pteval);
> > +}
>
> can't you just set set_pte_at to native_set_pte?
yup. oh, i had other code there, which is why it's not already
native_set_pte_at.
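
A minimal sketch of Stefano's suggestion, assuming the stock
native_set_pte_at() helper from asm/pgtable.h is suitable here:

    /* sketch: reuse the native helper instead of wrapping
     * native_set_pte in xen_dom0pvh_set_pte_at */
    pv_mmu_ops.set_pte_at = native_set_pte_at;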
>
> > static void xen_set_pte_at(struct mm_struct *mm, unsigned long addr,
> >                            pte_t *ptep, pte_t pteval)
> > {
> > @@ -1197,6 +1218,10 @@ static void xen_post_allocator_init(void);
> > static void __init xen_pagetable_setup_done(pgd_t *base)
> > {
> >         xen_setup_shared_info();
> > +
> > +       if (xen_feature(XENFEAT_auto_translated_physmap))
> > +               return;
> > +
> >         xen_post_allocator_init();
> > }
>
> Can we move the if..return to the beginning of
> xen_post_allocator_init? That would make it easier to understand what
> it is for. Or maybe we could have xen_setup_shared_info take a pgd_t
> *base parameter so that you can set pagetable_setup_done to
> xen_setup_shared_info directly on pvh.
done.
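
That is, something along these lines (a sketch of the first option,
with the unrelated body elided):

    static void xen_post_allocator_init(void)
    {
            /* nothing to do on auto-translated (PVH) guests */
            if (xen_feature(XENFEAT_auto_translated_physmap))
                    return;
            ...
    }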
>
> > @@ -1455,6 +1480,10 @@ static void __init xen_set_pte_init(pte_t *ptep, pte_t pte)
> > static void pin_pagetable_pfn(unsigned cmd, unsigned long pfn)
> > {
> >         struct mmuext_op op;
> > +
> > +       if (xen_feature(XENFEAT_writable_page_tables))
> > +               return;
> > +
> >         op.cmd = cmd;
> >         op.arg1.mfn = pfn_to_mfn(pfn);
> >         if (HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF))
>
> given the very limited set of mmu pvops that we initialize on pvh, I
> cannot find who would actually call pin_pagetable_pfn on pvh.
xen_setup_kernel_pagetable().
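
For context, that is the early-boot call that pins the top-level page
table; on x86-64 it looks roughly like this (from memory, exact
arguments may differ):

    /* in xen_setup_kernel_pagetable() */
    pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE, PFN_DOWN(__pa(init_level4_pgt)));

With the XENFEAT_writable_page_tables check above, that call becomes a
no-op on PVH instead of issuing a pointless hypercall.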
>
> > @@ -1652,6 +1681,10 @@ static void set_page_prot(void *addr, pgprot_t prot)
> >         unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
> > +               /* set_pte* for PCI devices to map iomem. */
> > +               if (xen_initial_domain()) {
> > +                       pv_mmu_ops.set_pte = native_set_pte;
> > +                       pv_mmu_ops.set_pte_at = xen_dom0pvh_set_pte_at;
> > +               }
> > +               return;
> > +       }
> > +
> >         x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
> >         x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
> > -       x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
> >         pv_mmu_ops = xen_mmu_ops;
> >
> > memset(dummy_mapping, 0xff, PAGE_SIZE);
>
> I wonder whether it would make sense to have a xen_pvh_init_mmu_ops,
> given that they end up being so different...
It's not a pv ops call; it's called from xen_start_kernel in
enlighten.c. So let's just leave it like that.
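
For reference, the split being suggested might look something like this
(a sketch only; the function name is hypothetical):

    static void __init xen_pvh_init_mmu_ops(void)
    {
            /* dom0 PVH still needs set_pte* for PCI iomem mappings */
            if (xen_initial_domain()) {
                    pv_mmu_ops.set_pte = native_set_pte;
                    pv_mmu_ops.set_pte_at = xen_dom0pvh_set_pte_at;
            }
    }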
> > +static int pvh_map_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
> > +                          void *data)
> > +{
> > +       int rc;
> > +       struct pvh_remap_data *remapp = data;
> > +       struct xen_pvh_pfn_info *pvhp = remapp->pvhinfop;
> > +       unsigned long pfn = page_to_pfn(pvhp->pi_paga[pvhp->pi_next_todo++]);
> > +       pte_t pteval = pte_mkspecial(pfn_pte(pfn, remapp->prot));
> > +
> > +       if ((rc = pvh_add_to_xen_p2m(pfn, remapp->fgmfn, remapp->domid)))
> > +               return rc;
> > +       native_set_pte(ptep, pteval);
>
> Do we actually need the pte to be "special"?
> I would think that being in the guest p2m, it is actually quite a
> normal page.
Ah, I remember. The reason I had it special is that earlier I was
allocating pfns from high up in the address space; those pfns were
greater than highest_memmap_pfn. But now I'm allocating them via
ballooning, so I can change it to normal. If we go back to allocating
pfn space from high up, we'd need the special bit back for this check
in vm_normal_page():

    if (unlikely(pfn > highest_memmap_pfn)) {
            print_bad_pte(vma, addr, pte, NULL);
            return NULL;
    }
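
So the change presumably amounts to dropping the pte_mkspecial()
wrapper in pvh_map_pte_fn, along these lines:

    /* sketch: ballooned pfns sit below highest_memmap_pfn, so a
     * plain (non-special) pte is fine here */
    pte_t pteval = pfn_pte(pfn, remapp->prot);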
> > +       return 0;
> > +}
> > +
> > +/* The only caller at moment passes one gmfn at a time.
> > + * PVH TBD/FIXME: expand this in future to honor batch requests.
> > + */
> > +static int pvh_remap_gmfn_range(struct vm_area_struct *vma,
> > +                               unsigned long addr, unsigned long mfn, int nr,
> > +                               pgprot_t prot, unsigned domid,
> > +                               struct xen_pvh_pfn_info *pvhp)
> > +{
> > +       int err;
> > +       struct pvh_remap_data pvhdata;
> > +
> > +       if (nr > 1)
> > +               return -EINVAL;
> > +
> > +       pvhdata.fgmfn = mfn;
> > +       pvhdata.prot = prot;
> > +       pvhdata.domid = domid;
> > +       pvhdata.pvhinfop = pvhp;
> > +       err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
> > +                                 pvh_map_pte_fn, &pvhdata);
> > +       flush_tlb_all();
>
> Maybe we can use one of the flush_tlb_range varieties instead of a
> flush_tlb_all?
Changed it to: flush_tlb_page(vma, addr);
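
That works while nr is capped at 1; if the batch path is ever
implemented, a ranged flush would presumably replace it, e.g.:

    /* sketch for a future batched path (nr > 1) */
    flush_tlb_range(vma, addr, addr + ((unsigned long)nr << PAGE_SHIFT));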
> > +
> > +       if (!pvhp || !xen_feature(XENFEAT_auto_translated_physmap))
> > +               return 0;
> > +
> > +       while (pvhp->pi_next_todo--) {
> > +               unsigned long pfn;
> > +
> > +               /* the mmu has already cleaned up the process mmu resources
> > +                * at this point (lookup_address will return NULL). */
> > +               pfn = page_to_pfn(pvhp->pi_paga[pvhp->pi_next_todo]);
> > +               pvh_rem_xen_p2m(pfn, 1);
> > +               count++;
> > +       }
> > +       flush_tlb_all();
> > +       return count;
>
> Who is going to remove the corresponding mapping from the vma?
> Also we might be able to replace the flush_tlb_all with a
> flush_tlb_range.
The kernel already does that before privcmd_close is called. We don't
have the addresses needed for flush_tlb_range in privcmd_close().
thanks,
Mukesh
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel