
Re: [Xen-devel] [PATCH 6/6] Change MMU_PT_UPDATE_RESERVE_AD to support update page table for foreign domain


  • To: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Tue, 30 Jun 2009 15:59:39 +0100
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 30 Jun 2009 08:00:19 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acnh4J83XJstlRtsS9u8RG96aAW7awAoIXfQAAM2lcAAAXUgAAABlLc2AABB5yAAAJ4QawWunuBAAA7QeGU=
  • Thread-topic: [Xen-devel] [PATCH 6/6] Change MMU_PT_UPDATE_RESERVE_AD to support update page table for foreign domain

This doesn't even build. You compare a pointer with an integer.

 -- Keir
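
(For reference, the offending test appears to be "if (pt_dom == d)" in the
new foreign-page-table path: pt_dom is an unsigned int carrying domid+1 from
the upper 16 bits of FD, while d is a struct domain pointer, so the
comparison cannot compile. A minimal sketch of what was presumably meant,
i.e. comparing the looked-up owner against the current domain, is:

        /* pt_owner was obtained via rcu_lock_domain_by_id(pt_dom - 1). */
        if ( pt_owner == d )
            rcu_unlock_domain(pt_owner);
        v = pt_owner->vcpu[0];

This is only an illustration of the review point, not the author's
follow-up patch.)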

On 30/06/2009 08:58, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:

> Keir, this is the updated version; please have a look at it.
> 
> Thanks
> Yunhong Jiang
> 
> Currently MMU_PT_UPDATE_RESERVE_AD supports updating page tables only for the
> current domain. This patch adds support for foreign domains as well.
> 
> Signed-off-by: Jiang, Yunhong <yunhong.jiang@xxxxxxxxx>
> 
> diff -r 9a74617c4a28 xen/arch/x86/mm.c
> --- a/xen/arch/x86/mm.c Tue Jun 30 01:44:30 2009 +0800
> +++ b/xen/arch/x86/mm.c Tue Jun 30 01:51:15 2009 +0800
> @@ -110,6 +110,7 @@
>  #include <asm/hypercall.h>
>  #include <asm/shared.h>
>  #include <public/memory.h>
> +#include <public/sched.h>
>  #include <xsm/xsm.h>
>  #include <xen/trace.h>
>  
> @@ -2999,8 +3000,9 @@ int do_mmu_update(
>      unsigned long gpfn, gmfn, mfn;
>      struct page_info *page;
>      int rc = 0, okay = 1, i = 0;
> -    unsigned int cmd, done = 0;
> -    struct domain *d = current->domain;
> +    unsigned int cmd, done = 0, pt_dom;
> +    struct domain *d = current->domain, *pt_owner = NULL;
> +    struct vcpu *v = current;
>      struct domain_mmap_cache mapcache;
>  
>      if ( unlikely(count & MMU_UPDATE_PREEMPTED) )
> @@ -3018,7 +3020,28 @@ int do_mmu_update(
>          goto out;
>      }
>  
> -    if ( !set_foreigndom(foreigndom) )
> +    pt_dom = foreigndom >> 16;
> +    if (pt_dom)
> +    {
> +        /* The page table belongs to a foreign domain. */
> +        pt_owner = rcu_lock_domain_by_id(pt_dom - 1);
> +        if (!pt_owner )
> +        {
> +            rc = -EINVAL;
> +            goto out;
> +        }
> +        if( !IS_PRIV_FOR(d, pt_owner) )
> +        {
> +            rc = -ESRCH;
> +            goto out;
> +        }
> +        if (pt_dom == d)
> +            rcu_unlock_domain(pt_owner);
> +        v = pt_owner->vcpu[0];
> +    } else
> +        pt_owner = d;
> +
> +    if ( !set_foreigndom(foreigndom & 0xFFFFU) )
>      {
>          rc = -ESRCH;
>          goto out;
> @@ -3059,9 +3082,9 @@ int do_mmu_update(
>  
>              req.ptr -= cmd;
>              gmfn = req.ptr >> PAGE_SHIFT;
> -            mfn = gmfn_to_mfn(d, gmfn);
> -
> -            if ( unlikely(!get_page_from_pagenr(mfn, d)) )
> +            mfn = gmfn_to_mfn(pt_owner, gmfn);
> +
> +            if ( unlikely(!get_page_from_pagenr(mfn, pt_owner)) )
>              {
>                  MEM_LOG("Could not get page for normal update");
>                  break;
> @@ -3080,24 +3103,21 @@ int do_mmu_update(
>                  {
>                      l1_pgentry_t l1e = l1e_from_intpte(req.val);
>                      okay = mod_l1_entry(va, l1e, mfn,
> -                                        cmd == MMU_PT_UPDATE_PRESERVE_AD,
> -                                        current);
> +                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>                  }
>                  break;
>                  case PGT_l2_page_table:
>                  {
>                      l2_pgentry_t l2e = l2e_from_intpte(req.val);
>                      okay = mod_l2_entry(va, l2e, mfn,
> -                                        cmd == MMU_PT_UPDATE_PRESERVE_AD,
> -                                        current);
> +                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>                  }
>                  break;
>                  case PGT_l3_page_table:
>                  {
>                      l3_pgentry_t l3e = l3e_from_intpte(req.val);
>                      rc = mod_l3_entry(va, l3e, mfn,
> -                                      cmd == MMU_PT_UPDATE_PRESERVE_AD, 1,
> -                                      current);
> +                                      cmd == MMU_PT_UPDATE_PRESERVE_AD, 1, v);
>                      okay = !rc;
>                  }
>                  break;
> @@ -3106,8 +3126,7 @@ int do_mmu_update(
>                  {
>                      l4_pgentry_t l4e = l4e_from_intpte(req.val);
>                      rc = mod_l4_entry(va, l4e, mfn,
> -                                      cmd == MMU_PT_UPDATE_PRESERVE_AD, 1,
> -                                      current);
> +                                      cmd == MMU_PT_UPDATE_PRESERVE_AD, 1, v);
>                      okay = !rc;
>                  }
>                  break;
> @@ -3115,7 +3134,7 @@ int do_mmu_update(
>                  case PGT_writable_page:
>                      perfc_incr(writable_mmu_updates);
>                      okay = paging_write_guest_entry(
> -                        current, va, req.val, _mfn(mfn));
> +                        v, va, req.val, _mfn(mfn));
>                      break;
>                  }
>                  page_unlock(page);
> @@ -3126,7 +3145,7 @@ int do_mmu_update(
>              {
>                  perfc_incr(writable_mmu_updates);
>                  okay = paging_write_guest_entry(
> -                    current, va, req.val, _mfn(mfn));
> +                    v, va, req.val, _mfn(mfn));
>                  put_page_type(page);
>              }
>  
> @@ -3191,6 +3210,9 @@ int do_mmu_update(
>      perfc_add(num_page_updates, i);
>  
>   out:
> +    if (pt_owner && (pt_owner != d))
> +        rcu_unlock_domain(pt_owner);
> +
>      /* Add incremental work we have done to the @done output parameter. */
>      if ( unlikely(!guest_handle_is_null(pdone)) )
>      {
> diff -r 9a74617c4a28 xen/include/public/xen.h
> --- a/xen/include/public/xen.h Tue Jun 30 01:44:30 2009 +0800
> +++ b/xen/include/public/xen.h Tue Jun 30 01:44:33 2009 +0800
> @@ -166,10 +166,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_pfn_t);
>   * ptr[1:0] specifies the appropriate MMU_* command.
>   * 
>   * ptr[1:0] == MMU_NORMAL_PT_UPDATE:
> - * Updates an entry in a page table. If updating an L1 table, and the new
> + * Updates an entry in a page table.
> + * The page table can be owned by the current domain, or by a foreign
> + * domain. If the page table belongs to a foreign domain, that domain
> + * should be specified in the upper 16 bits of FD.
> + * If updating an L1 table, and the new
>   * table entry is valid/present, the mapped frame must belong to the FD, if
> - * an FD has been specified. If attempting to map an I/O page then the
> - * caller assumes the privilege of the FD.
> + * a foreign domain is specified in the lower 16 bits of FD. If attempting
> + * to map an I/O page then the caller assumes the privilege of the FD.
>   * FD == DOMID_IO: Permit /only/ I/O mappings, at the priv level of the caller.
>   * FD == DOMID_XEN: Map restricted areas of Xen's heap space.
>   * ptr[:2]  -- Machine address of the page-table entry to modify.
> 
> 
> Keir Fraser wrote:
>> On 01/06/2009 10:35, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>> 
>>> I think I missed this logic. I just made sure the domain is
>>> suspended, but that seems wrong, because a suspended domain can
>>> still be killed and relinquish its resources. So I will add this
>>> logic and resend the patch.
>>> 
>>> But I'm not sure whether we need to ensure the guest is
>>> suspended/paused during these functions, as exchanging a page or
>>> PTE in flight may cause trouble. Or we can leave it to the
>>> user-space tools, since this situation will not damage the Xen HV
>>> itself.
>> 
>> The HV should not bother to do the suspend check. It is a higher-level
>> problem which should be checked by the tools.
>> 
>> -- Keir
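
As context for the interface under discussion: the patch encodes the
page-table owner as domid+1 in the upper 16 bits of the foreigndom argument
(do_mmu_update() takes "pt_dom = foreigndom >> 16" and locks "pt_dom - 1"),
while the lower 16 bits keep their existing FD meaning. A caller-side sketch,
with a helper name invented here purely for illustration:

    #include <stdint.h>

    #define DOMID_SELF 0x7FF0U   /* from xen/include/public/xen.h */

    /*
     * Hypothetical helper (not part of the patch): build the foreigndom
     * argument for HYPERVISOR_mmu_update() so that the updates are applied
     * to pt_domid's page tables while normal FD handling stays unchanged.
     */
    static inline uint32_t make_foreigndom(uint16_t pt_domid, uint16_t fd)
    {
        /* Upper 16 bits carry pt_domid + 1; zero means "current domain". */
        return ((uint32_t)(pt_domid + 1) << 16) | fd;
    }

    /* Example: update domain 5's page tables, no foreign frame domain:
     *     uint32_t foreigndom = make_foreigndom(5, DOMID_SELF);
     */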





 

