
Re: [Xen-devel] [PATCH RFC] x86/hvm: unify HVM and PVH hypercall tables.



At 09:08 +0100 on 09 May (1399622918), Jan Beulich wrote:
> >>> On 08.05.14 at 17:31, <tim@xxxxxxx> wrote:
> >  - __HYPERVISOR_platform_op (XSM_PRIV callers only).
> 
> I think this needs a little more thought than just relying on the
> XSM_PRIV check: There are several operations here dealing with
> machine memory addresses, which aren't directly meaningful to PVH
> (and HVM, but for now we're not planning on having HVM Dom0). Do
> you think it is useful to expose them the way they are nevertheless?

I'll punt that to Mukesh: are there operations in here that a PVH
dom0 couldn't/shouldn't use or that need adjustment?

For this patch, I don't think it makes any difference; I'm just giving
HVM the same interface as PVH here.
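
For context, the only gate on that path today is the XSM check at the
top of the handler. Roughly this shape (a sketch from memory, not a
quote of the source):

    long ret;
    struct xen_platform_op curop, *op = &curop;

    if ( copy_from_guest(op, u_xenpf_op, 1) )
        return -EFAULT;

    /* Unprivileged callers are rejected here, before any sub-op runs. */
    ret = xsm_platform_op(XSM_PRIV, op->cmd);
    if ( ret )
        return ret;

    switch ( op->cmd )
    {
        /* ... sub-ops, several of which take machine addresses ... */
    }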

> >  - __HYPERVISOR_mmuext_op.
> >    The pagetable manipulation MMUEXT ops are already denied to
> >    paging_mode_refcounts() domains;
> 
> Denied? From what I can see, MMUEXT_PIN_L?_TABLE as well as
> MMUEXT_UNPIN_TABLE succeed (in the sense of being ignored) for
> such pg_owner domains.

Hmm, so they do.  How odd.
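
(The pin_page path just breaks out instead of failing; from my reading
of do_mmuext_op(), paraphrased:

    pin_page:
        /* Out-of-range levels are silently ignored rather than failed. */
        if ( (cmd - MMUEXT_PIN_L1_TABLE) > (CONFIG_PAGING_LEVELS - 1) )
            break;

        /* Likewise for refcounted domains: "success", but nothing is
         * actually pinned. */
        if ( paging_mode_refcounts(pg_owner) )
            break;

which also covers the paging-levels point you make below.)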

> (Looking at this again, I consider it similarly bogus to ignore rather
> than fail pin requests for page table levels higher than supported.
> Part of that code might anyway be cleaned up now that
> CONFIG_PAGING_LEVELS can only ever be 4, unless we expect a 5th level
> to appear at some point.)
> 
> > the baseptr ones are already
> >    denied to paging_mode_translate() domains.
> >    I have restricted MMUEXT_[UN]MARK_SUPER to !paging_mode_refcounts()
> >    domains as well, as I can see no need for them in PVH.
> >    That leaves TLB and cache flush operations and MMUEXT_CLEAR_PAGE /
> >    MMUEXT_COPY_PAGE, all of which are OK.
> 
> Would permitting these two not undermine at least mem-access?

Good point; I'll restrict them to PV only, since they're not needed
for PVH. 
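
Something like this, following the same !paging_mode_refcounts() test
I used for MMUEXT_[UN]MARK_SUPER (just a sketch; exact error plumbing
to be sorted out in v2):

    case MMUEXT_CLEAR_PAGE:
    case MMUEXT_COPY_PAGE:
        /* Not needed by PVH, and letting translated guests use them
         * would sidestep mem_access checks, so keep them PV-only. */
        if ( paging_mode_refcounts(pg_owner) )
        {
            okay = 0;
            break;
        }
        /* ... existing clear/copy handling ... */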

> I think this would be done more efficiently/cleanly with a single call
> to do_memory_op(), and a subsequent check for
> XENMEM_decrease_reservation (similarly below for the 32-bit case).

Ack, will do. 
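
i.e. the wrapper ends up shaped like this (sketch only; the 32-bit
version does the same via compat_memory_op()):

    rc = do_memory_op(cmd, arg);

    if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
    {
        /* ... keep the existing decrease_reservation special-casing ... */
    }

    return rc;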

v2 coming shortly. 

Tim.
