Re: [Xen-devel] [PATCH 02/18] PVH xen: add XENMEM_add_to_physmap_range



On Thu, Jun 06, 2013 at 07:43:09AM +0100, Jan Beulich wrote:
> >>> On 05.06.13 at 22:41, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> > On Wed, 05 Jun 2013 08:32:52 +0100
> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> > 
> >> >>> On 05.06.13 at 02:31, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> >> >>> wrote:
> >> >> I also vaguely recall having pointed out in a much earlier review
> >> >> that this functionality is lacking a counterpart in
> >> >> compat_arch_memory_op().
> >> > 
> >> > Hmm.. confused how/why a 64bit PVH go thru compat_arch_memory_op()?
> >> > Can you pl explain?
> >> 
> >> Iirc the new hypercall isn't restricted to PVH guests, and hence
> >> needs a compat implementation regardless of 32-bit PVH not
> >> existing yet.
> > 
> > This patch does not introduce the hcall, it was introduced much earlier.
> > It implements the portion needed for 64bit PVH.
> 
> So how is this statement of yours in line with this hunk of the
> patch?
> 
> @@ -4716,6 +4751,39 @@ long arch_memory_op(int op, 
> XEN_GUEST_HANDLE_PARAM(void) arg)
>          return rc;
>      }
>  
> +    case XENMEM_add_to_physmap_range:
> +    {
> +        struct xen_add_to_physmap_range xatpr;
> +        struct domain *d;
> +
> +        if ( copy_from_guest(&xatpr, arg, 1) )
> +            return -EFAULT;
> +
> +        /* This mapspace is redundant for this hypercall */
> +        if ( xatpr.space == XENMAPSPACE_gmfn_range )
> +            return -EINVAL;
> +
> +        d = rcu_lock_domain_by_any_id(xatpr.domid);
> +        if ( d == NULL )
> +            return -ESRCH;
> +
> +        if ( xsm_add_to_physmap(XSM_TARGET, current->domain, d) )
> +        {
> +            rcu_unlock_domain(d);
> +            return -EPERM;
> +        }
> +
> +        rc = xenmem_add_to_physmap_range(d, &xatpr);
> +
> +        rcu_unlock_domain(d);
> +
> +        if ( rc == -EAGAIN )
> +            rc = hypercall_create_continuation(
> +                __HYPERVISOR_memory_op, "ih", op, arg);
> +
> +        return rc;
> +    }
> +
>      case XENMEM_set_memory_map:
>      {
>          struct xen_foreign_memory_map fmap;
> 
> If the hypercall were handled before, adding a new case statement
> to arch_memory_op() would cause a compilation error. All that was

Meaning if it were handled in do_memory_op, right?

> there before this patch was the definition of the hypercall (for ARM),
> but I'm quite strongly opposed to adding x86 support for this
> hypercall only for one half of the possible set of PV (and HVM?)
> guests; the fact that PVH is 64-bit only for the moment has nothing
> to do with this. The only alternative would be to constrain the
> specific sub-hypercall to PVH, but that would be rather contrived,
> so I'm already trying to make clear that I wouldn't accept such a
> solution to the original comment.

Pardon my misunderstanding - so what you are saying is that the location
of this hypercall in arch_memory_op is correct, but only if it also
works with 32-bit guests?

Which would imply at least doing something in compat_arch_memory_op
(to copy the arguments over properly), and having arch_memory_op
return -ENOSYS if the guest is 32-bit (at least for right now).

In the future that -ENOSYS return would be removed so that 32-bit
guests can use this hypercall as well.

Or perhaps it makes sense to merge the x86 and ARM versions of this
hypercall into generic code? (later on)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel