
Re: [PATCH v7 03/15] x86/mm: rewrite virt_to_xen_l*e

On 27.07.2020 11:09, Hongyan Xia wrote:
On Tue, 2020-07-14 at 12:47 +0200, Jan Beulich wrote:
On 29.05.2020 13:11, Hongyan Xia wrote:
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -291,7 +291,13 @@ void copy_page_sse2(void *, const void *);
  #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
  #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
+#define vmap_to_mfn(va) ({                                                  \
+        const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned long)(va));   \
+        mfn_t mfn_ = l1e_get_mfn(*pl1e_);                                   \
+        unmap_domain_page(pl1e_);                                           \
+        mfn_; })

Just as is already the case in domain_page_map_to_mfn(), I think
you want to add "BUG_ON(!pl1e_)" here to limit the impact of any
problem to DoS (rather than a possible privilege escalation).

Or actually, considering the only case where virt_to_xen_l1e()
would return NULL, returning INVALID_MFN here would likely be
even more robust. There looks to be just a single caller, which
would need adjusting to cope with an error coming back. In fact -
it already ASSERT()s, despite NULL right now never coming back
from vmap_to_page(). I think the loop there would better be

     for ( i = 0; i < pages; i++ )
     {
         struct page_info *page = vmap_to_page(va + i * PAGE_SIZE);

         if ( page )
             page_list_add(page, &pg_list);
     }


To be honest, I think the current implementation of vmap_to_mfn() is
just incorrect. There is simply no guarantee that a vmap is mapped with
small pages, so IMO we just cannot do virt_to_xen_l1e() here. The
correct way is to have a generic page table walking function which
walks from the base, can stop at any level, and properly returns a
code indicating the level reached or any error.
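The walker proposed above can be sketched with a toy, self-contained model. This is not Xen code: the names (pt_walk, PTE_PSE, pte_t), the two-level layout, and the return convention (level reached, or 0 for a non-present entry) are all invented for illustration of the "stop at any level" idea.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define PTE_PRESENT  0x1u
#define PTE_PSE      0x2u   /* entry maps a superpage at this level */
#define ENTRIES      4      /* tiny tables for the demo */

/* Toy page-table entry: flags plus a pointer to the next-level table
 * (NULL when the entry is a leaf in this model). */
typedef struct pte { uint32_t flags; struct pte *next; } pte_t;

/* Walk from the root; return the level (2 = top, 1 = leaf) at which
 * the walk stopped, or 0 on a non-present entry. *out receives the
 * last entry examined, so the caller can read the mapping from it. */
static int pt_walk(pte_t *root, unsigned int idx2, unsigned int idx1,
                   pte_t **out)
{
    pte_t *e = &root[idx2];

    *out = e;
    if ( !(e->flags & PTE_PRESENT) )
        return 0;
    if ( e->flags & PTE_PSE )       /* large page: stop at level 2 */
        return 2;

    e = &e->next[idx1];
    *out = e;
    return (e->flags & PTE_PRESENT) ? 1 : 0;
}
```

With such a walker, vmap_to_mfn() could handle superpage mappings by reading the frame from whatever level the walk stopped at, instead of unconditionally dereferencing an L1 entry.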

I am inclined to BUG_ON() here, and upstream a proper fix later to
vmap_to_mfn() as an individual patch.

Well, yes, in principle large pages can result from e.g. vmalloc()ing
a large enough area. However, rather than thinking of a generic
walking function as a solution, how about the simple one for the
immediate needs: Add MAP_SMALL_PAGES?
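The effect of such a flag can be illustrated with a minimal, self-contained sketch. The names (map_granularity, MAP_SMALL_PAGES) and sizes are assumptions for this toy model, not the actual Xen vmap interface: the point is only that a populate routine which would otherwise pick superpages opportunistically always falls back to 4k mappings when the flag is set, so a later walk down to L1 is guaranteed to succeed.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE        4096ul
#define SUPERPAGE_SIZE   (512ul * PAGE_SIZE)
#define MAP_SMALL_PAGES  0x1u   /* hypothetical flag, as proposed */

/* Return the mapping granularity a populate routine would pick for a
 * request of 'size' bytes: superpages when the region is large enough,
 * unless the caller insists on small pages. */
static unsigned long map_granularity(size_t size, unsigned int flags)
{
    if ( flags & MAP_SMALL_PAGES )
        return PAGE_SIZE;
    return size >= SUPERPAGE_SIZE ? SUPERPAGE_SIZE : PAGE_SIZE;
}
```

Callers that later need vmap_to_mfn() on the region would pass the flag at allocation time, sidestepping the generic-walker requirement for the immediate use case.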

Also, as a general remark: when you disagree with review feedback, I
think it would be quite reasonable to wait to send the next version
until the disagreement is resolved, unless doing so would incur
unduly long delays.



