[Xen-devel] [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
From: Zhongze Liu <blackskygg@xxxxxxxxx>

The existing XENMAPSPACE_gmfn_foreign subop of XENMEM_add_to_physmap
forbids Dom0 from mapping memory pages from one DomU into another, which
rules out some useful yet not dangerous use cases -- such as sharing
pages among DomUs so that they can do shm-based communication.

This patch introduces XENMAPSPACE_gmfn_share to address this
inconvenience. It is mostly the same as XENMAPSPACE_gmfn_foreign but
has its own xsm check. Specifically, the patch:

* Introduces a new av permission MMU__SHARE_MEM to denote whether two
  domains can share memory via the new subop;
* Introduces xsm_map_gmfn_share() to check that (current) has proper
  permission over (d) and (t) AND that MMU__SHARE_MEM is allowed
  between (d) and (t);
* Modifies the default xen.te to allow MMU__SHARE_MEM for normal domains
  that allow grant mapping/event channels.

The new subop is marked unsupported for x86, because calling
p2m_add_foreign() on two DomUs is currently not supported on x86.

This is for the proposal "Allow setting up shared memory areas between
VMs from xl config file" (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html

Signed-off-by: Zhongze Liu <blackskygg@xxxxxxxxx>
Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>

Cc: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: Tim Deegan <tim@xxxxxxx>
Cc: xen-devel@xxxxxxxxxxxxx
---
Changes in v8:
  - typo

Changes in v7:
  - add additional checks
  - update comments to reflect that

Changes in v5:
  - fix coding style
  - remove useless x86 hypervisor message for the unimplemented op
  - code style
  - improve/add comments
---
 tools/flask/policy/modules/xen.if   |  2 ++
 xen/arch/arm/mm.c                   |  7 ++++++-
 xen/include/public/memory.h         |  8 ++++++++
 xen/include/xsm/dummy.h             | 14 ++++++++++++++
 xen/include/xsm/xsm.h               |  6 ++++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               |  8 ++++++++
 xen/xsm/flask/policy/access_vectors |  5 +++++
 8 files changed, 50 insertions(+), 1 deletion(-)
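Usage sketch for reviewers -- NOT part of the patch. It assumes the
xc_domain_add_to_physmap_batch() wrapper that a later patch in this
series introduces, and uses placeholder domids and gfns; treat every
name and number below as illustrative only. A second note after the
diff models what the default (non-flask) check reduces to.

#include <stdint.h>
#include <stdio.h>
#include <xenctrl.h>
#include <xen/memory.h>

/* Map one page owned by DomU A into DomU B's physmap via the new
 * XENMAPSPACE_gmfn_share space.  current (Dom0) must pass the xsm
 * check against both domains; under flask, mmu:share_mem must also
 * be allowed between them. */
int share_one_page(uint32_t domid_a, uint32_t domid_b)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    xen_ulong_t idx  = 0x100000; /* gfn of the page in the owner (A) */
    xen_pfn_t   gpfn = 0x200000; /* gfn it appears at in the mapper (B) */
    int err = 0, rc;

    if ( !xch )
        return -1;

    rc = xc_domain_add_to_physmap_batch(xch, domid_b /* dest */,
                                        domid_a /* source */,
                                        XENMAPSPACE_gmfn_share,
                                        1 /* nr pages */, &idx, &gpfn, &err);
    if ( rc || err )
        fprintf(stderr, "gmfn_share mapping failed: rc=%d err=%d\n", rc, err);

    xc_interface_close(xch);
    return rc ? rc : err;
}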
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 4e06cfc..b0ab089 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -128,6 +128,8 @@ define(`domain_comms', `
 	domain_event_comms($1, $2)
 	allow $1 $2:grant { map_read map_write copy unmap };
 	allow $2 $1:grant { map_read map_write copy unmap };
+	allow $1 $2:mmu share_mem;
+	allow $2 $1:mmu share_mem;
 ')
 
 # domain_self_comms(domain)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 7a06a33..7b7869f 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1245,6 +1245,7 @@ int xenmem_add_to_physmap_one(
         break;
 
     case XENMAPSPACE_gmfn_foreign:
+    case XENMAPSPACE_gmfn_share:
     {
         struct domain *od;
         p2m_type_t p2mt;
@@ -1259,7 +1260,11 @@ int xenmem_add_to_physmap_one(
             return -EINVAL;
         }
 
-        rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+        if ( space == XENMAPSPACE_gmfn_foreign )
+            rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+        else
+            rc = xsm_map_gmfn_share(XSM_TARGET, d, od);
+
         if ( rc )
         {
             rcu_unlock_domain(od);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 8fc27ce..631d10e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -227,6 +227,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_machphys_mapping_t);
                                       Stage-2 using the Normal Memory
                                       Inner/Outer Write-Back Cacheable
                                       memory attribute. */
+#define XENMAPSPACE_gmfn_share   6 /* GMFN from another dom,
+                                      XENMEM_add_to_physmap_batch (and
+                                      currently ARM) only.  Unlike
+                                      XENMAPSPACE_gmfn_foreign, it
+                                      requires current to have mapping
+                                      privileges instead of the
+                                      destination domain. */
+
 /* ` } */
 
 /*
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index b0ac1f6..dd99593 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -535,6 +535,20 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
     return xsm_default_action(action, d, t);
 }
 
+/*
+ * Be aware that this is not an exact default equivalent of its flask
+ * variant, which also checks whether @d and @t "are allowed to share
+ * memory pages"; for now, we don't have a proper default equivalent
+ * of such a check.
+ */
+static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
+                                         struct domain *t)
+{
+    XSM_ASSERT_ACTION(XSM_TARGET);
+    return xsm_default_action(action, current->domain, d) ?:
+           xsm_default_action(action, current->domain, t);
+}
+
 static XSM_INLINE int xsm_hvm_param(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 3d67962..96edfeb 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -86,6 +86,7 @@ struct xsm_operations {
     int (*add_to_physmap) (struct domain *d1, struct domain *d2);
     int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
     int (*map_gmfn_foreign) (struct domain *d, struct domain *t);
+    int (*map_gmfn_share) (struct domain *d, struct domain *t);
 
     int (*claim_pages) (struct domain *d);
     int (*console_io) (struct domain *d, int cmd);
@@ -376,6 +377,11 @@ static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, str
     return xsm_ops->map_gmfn_foreign(d, t);
 }
 
+static inline int xsm_map_gmfn_share (xsm_default_t def, struct domain *d, struct domain *t)
+{
+    return xsm_ops->map_gmfn_share(d, t);
+}
+
 static inline int xsm_claim_pages(xsm_default_t def, struct domain *d)
 {
     return xsm_ops->claim_pages(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 3290d04..db5f3ec 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -124,6 +124,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, add_to_physmap);
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
+    set_to_dummy_if_null(ops, map_gmfn_share);
 
     set_to_dummy_if_null(ops, vm_event_control);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 6da2773..bff48f8 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1192,6 +1192,13 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
     return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
 }
 
+static int flask_map_gmfn_share(struct domain *d, struct domain *t)
+{
+    return current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
+           current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
+           domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
+}
+
 static int flask_hvm_param(struct domain *d, unsigned long op)
 {
     u32 perm;
@@ -1816,6 +1823,7 @@ static struct xsm_operations flask_ops = {
     .add_to_physmap = flask_add_to_physmap,
     .remove_from_physmap = flask_remove_from_physmap,
     .map_gmfn_foreign = flask_map_gmfn_foreign,
+    .map_gmfn_share = flask_map_gmfn_share,
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
     .get_device_group = flask_get_device_group,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index d01a7a0..6fe4c8e 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -381,6 +381,11 @@ class mmu
 # Allow a privileged domain to install a map of a page it does not own.  Used
 # for stub domain device models with the PV framebuffer.
     target_hack
+# Checked when using XENMEM_add_to_physmap with XENMAPSPACE_gmfn_share
+# to share memory between two domains:
+#  source = domain whose memory is being shared
+#  target = client domain
+    share_mem
 }
 
 # control of the paging_domctl split by subop
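A note on the default policy, promised above -- NOT part of the patch,
since the dummy.h comment is terse. With XSM_TARGET, xsm_default_action()
lets a source domain act on a destination iff the source *is* that domain,
is its device model, or is privileged; the dummy xsm_map_gmfn_share()
simply requires this of current against both (d) and (t), so the
flask-only mmu:share_mem restriction has no default counterpart. A
standalone model, with toy types rather than real hypervisor structures:

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_domain {
    bool is_privileged;         /* e.g. Dom0 */
    struct toy_domain *target;  /* set if this domain is a device model */
};

/* Roughly what xsm_default_action(XSM_TARGET, src, dst) decides. */
static int default_action_target(struct toy_domain *src, struct toy_domain *dst)
{
    if ( src == dst )                /* a domain may always act on itself */
        return 0;
    if ( dst && src->target == dst ) /* a device model may act on its target */
        return 0;
    if ( src->is_privileged )        /* a privileged domain may act on anyone */
        return 0;
    return -EPERM;
}

/* Model of the dummy hook: current must pass the check against both the
 * mapping domain d and the page-owning domain t. */
static int toy_map_gmfn_share(struct toy_domain *cur, struct toy_domain *d,
                              struct toy_domain *t)
{
    int rc = default_action_target(cur, d);

    return rc ? rc : default_action_target(cur, t);
}

Under the default policy a privileged Dom0 therefore always passes, which
is what the dummy.h comment means by there being no proper default
equivalent of the share_mem check.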
--
1.9.1

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel