
Re: [Xen-devel] [PATCH RFC] x86/hvm: unify HVM and PVH hypercall tables.



>>> On 08.05.14 at 17:31, <tim@xxxxxxx> wrote:
> Stage one of many in merging PVH and HVM code in the hypervisor.

But a pretty desirable one (not least because it means I'll be able
to remove the respective item from my todo list).

> This exposes a few new hypercalls to HVM guests, all of which were
> already available to PVH ones:
> 
>  - XENMEM_memory_map / XENMEM_machine_memory_map / XENMEM_machphys_mapping:
>    These are basically harmless, if a bit useless to plain HVM.
> 
>  - VCPUOP_send_nmi / VCPUOP_initialise / VCPUOP[_is]_up / VCPUOP_down
>    This will eventually let HVM guests bring up APs the way PVH ones do.
>    For now, the VCPUOP_initialise paths are still gated on is_pvh.
> 
>  - VCPUOP_get_physid
>    Harmless.
> 
>  - __HYPERVISOR_platform_op (XSM_PRIV callers only).

I think this needs a little more thought than just relying on the
XSM_PRIV check: There are several operations here dealing with
machine memory addresses, which aren't directly meaningful to PVH
(and HVM, but for now we're not planning on having HVM Dom0). Do
you think it is useful to expose them the way they are nevertheless?
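
If they were to be filtered rather than passed through, I'd imagine
something roughly like the sketch below (the hvm_platform_op wrapper
name is made up here, and the sub-op list is only illustrative, not
meant to be complete):

    static long hvm_platform_op(
        XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
    {
        struct xen_platform_op op;

        if ( copy_from_guest(&op, u_xenpf_op, 1) )
            return -EFAULT;

        switch ( op.cmd )
        {
        /* Sub-ops taking machine memory addresses aren't meaningful
         * to an auto-translated caller, so refuse them up front. */
        case XENPF_add_memtype:
        case XENPF_del_memtype:
        case XENPF_read_memtype:
            return -ENOSYS;
        default:
            return do_platform_op(u_xenpf_op);
        }
    }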

>  - __HYPERVISOR_mmuext_op.
>    The pagetable manipulation MMUEXT ops are already denied to
>    paging_mode_refcounts() domains;

Denied? From what I can see, MMUEXT_PIN_L?_TABLE as well as
MMUEXT_UNPIN_TABLE succeed (in the sense of being ignored) for
such pg_owner domains. (I consider it similarly bogus to ignore rather
than fail pin requests for page table levels higher than supported, now
that I look at this again - part of that code might anyway be cleaned
up now that CONFIG_PAGING_LEVELS can only ever be 4, unless we
expect a 5th level to appear at some point.)
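
For reference, the relevant piece of do_mmuext_op() reads roughly like
this (paraphrased, not a verbatim quote):

    case MMUEXT_PIN_L1_TABLE:
        type = PGT_l1_page_table;
        goto pin_page;
    /* ... MMUEXT_PIN_L{2,3,4}_TABLE similarly ... */

    pin_page:
        /* Ignore pinning of invalid paging levels. */
        if ( (op.cmd - MMUEXT_PIN_L1_TABLE) > (CONFIG_PAGING_LEVELS - 1) )
            break;

        if ( paging_mode_refcounts(pg_owner) )
            break;              /* "succeeds" without doing anything */
        /* ... actual type get/put work ... */

    case MMUEXT_UNPIN_TABLE:
        if ( paging_mode_refcounts(pg_owner) )
            break;              /* likewise just ignored */

i.e. both the pin and unpin paths fall out of the switch with rc still
zero for such domains.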

> the baseptr ones are already
>    denied to paging_mode_translate() domains.
>    I have restricted MMUEXT[UN]MARK_SUPER to !paging_mode_refcounts()
>    domains as well, as I can see no need for them in PVH.
>    That leaves TLB and cache flush operations and MMUEXT_CLEAR_PAGE /
>    MMUEXT_COPY_PAGE, all of which are OK.

Would permitting these two not undermine at least mem-access?
(If so, I suppose we already have this problem for
PHYSDEVOP_pirq_eoi_gmfn_v* and other operations where guest
memory gets written by the hypervisor, but as being discussed on a
separate thread, that appears to be an accepted limitation. But for
the two operations above, this would give guests a way to explicitly
circumvent the access checking/enforcement, as these are really
only library functions intended to help (32-bit) guests avoid needing
to do multiple hypercalls to access memory not mapped through a
permanent mapping.)

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3433,16 +3433,13 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>      switch ( cmd & MEMOP_CMD_MASK )
>      {
> -    case XENMEM_memory_map:
> -    case XENMEM_machine_memory_map:
> -    case XENMEM_machphys_mapping:
> -        return -ENOSYS;
>      case XENMEM_decrease_reservation:
>          rc = do_memory_op(cmd, arg);
>          current->domain->arch.hvm_domain.qemu_mapcache_invalidate = 1;
>          return rc;
> +    default:
> +        return do_memory_op(cmd, arg);

I think this would be done more efficiently/cleanly with a single call
to do_memory_op(), followed by a check for
XENMEM_decrease_reservation (and similarly below for the 32-bit case).
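
I.e. something along the lines of (untested):

    static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
    {
        long rc = do_memory_op(cmd, arg);

        if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
            current->domain->arch.hvm_domain.qemu_mapcache_invalidate = 1;

        return rc;
    }

and similarly with compat_memory_op() in the 32-bit variant.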

Jan




 

