
Re: [PATCH 6/6] x86/mem-paging: consistently use gfn_t



Hi Jan,

On 16/04/2020 16:48, Jan Beulich wrote:
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2151,16 +2151,17 @@ static int mod_l1_entry(l1_pgentry_t *pl
               paging_mode_translate(pg_dom) )
          {
              p2m_type_t p2mt;
+            unsigned long gfn = l1e_get_pfn(nl1e);

How about making gfn a gfn_t directly? This would avoid code churn when...

              p2m_query_t q = l1e_get_flags(nl1e) & _PAGE_RW ?
                              P2M_ALLOC | P2M_UNSHARE : P2M_ALLOC;
-            page = get_page_from_gfn(pg_dom, l1e_get_pfn(nl1e), &p2mt, q);
+            page = get_page_from_gfn(pg_dom, gfn, &p2mt, q);

... I am going to convert get_page_from_gfn() to use a typesafe gfn. See [1].
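
For illustration, a rough sketch of the suggestion (assuming get_page_from_gfn()
still takes an unsigned long until [1] goes in; _gfn()/gfn_x() are the usual
typesafe wrappers):

              p2m_type_t p2mt;
              /* Declare the local as gfn_t from the start. */
              gfn_t gfn = _gfn(l1e_get_pfn(nl1e));
              p2m_query_t q = l1e_get_flags(nl1e) & _PAGE_RW ?
                              P2M_ALLOC | P2M_UNSHARE : P2M_ALLOC;

              /* gfn_x() unwraps the typesafe value for now; once [1] is in,
               * the call can take gfn directly without further churn here. */
              page = get_page_from_gfn(pg_dom, gfn_x(gfn), &p2mt, q);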

@@ -89,16 +88,15 @@ void p2m_mem_paging_drop_page(struct dom
   * already sent to the pager. In this case the caller has to try again until the
   * gfn is fully paged in again.
   */
-void p2m_mem_paging_populate(struct domain *d, unsigned long gfn_l)
+void p2m_mem_paging_populate(struct domain *d, gfn_t gfn)
  {
      struct vcpu *v = current;
      vm_event_request_t req = {
          .reason = VM_EVENT_REASON_MEM_PAGING,
-        .u.mem_paging.gfn = gfn_l
+        .u.mem_paging.gfn = gfn_x(gfn)
      };
      p2m_type_t p2mt;
      p2m_access_t a;
-    gfn_t gfn = _gfn(gfn_l);
      mfn_t mfn;
      struct p2m_domain *p2m = p2m_get_hostp2m(d);
      int rc = vm_event_claim_slot(d, d->vm_event_paging);
@@ -107,7 +105,7 @@ void p2m_mem_paging_populate(struct doma
      if ( rc == -EOPNOTSUPP )
      {
          gdprintk(XENLOG_ERR, "Dom%d paging gfn %lx yet no ring in place\n",
-                 d->domain_id, gfn_l);
+                 d->domain_id, gfn_x(gfn));

Please use PRI_gfn in the format string to match the argument change.
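
I.e. something along these lines (sketch only, assuming the usual PRI_gfn
format macro):

          gdprintk(XENLOG_ERR, "Dom%d paging gfn %"PRI_gfn" yet no ring in place\n",
                   d->domain_id, gfn_x(gfn));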

Cheers,

[1] https://lore.kernel.org/xen-devel/20200322161418.31606-18-julien@xxxxxxx/

--
Julien Grall



 

