Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...
On 08/25/2017 10:46 AM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Roger Pau Monne
>> Sent: 25 August 2017 10:32
>> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Stefano Stabellini
>> <sstabellini@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; George Dunlap
>> <George.Dunlap@xxxxxxxxxx>; Andrew Cooper
>> <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; Tim
>> (Xen.org) <tim@xxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>
>> Subject: Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a
>> new mappable resource type...
>>
>> On Tue, Aug 22, 2017 at 03:51:06PM +0100, Paul Durrant wrote:
>>> ... XENMEM_resource_ioreq_server
>>>
>>> This patch adds support for a new resource type that can be mapped using
>>> the XENMEM_acquire_resource memory op.
>>>
>>> If an emulator makes use of this resource type then, instead of mapping
>>> gfns, the IOREQ server will allocate pages from the heap. These pages
>>> will never be present in the P2M of the guest at any point and so are
>>> not vulnerable to any direct attack by the guest. They are only ever
>>> accessible by Xen and any domain that has mapping privilege over the
>>> guest (which may or may not be limited to the domain running the
>>> emulator).
>>>
>>> NOTE: Use of the new resource type is not compatible with use of
>>> XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag is
>>> set.
>>>
>>> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
>>> ---
>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>> Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
>>> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
>>> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
>>> Cc: Tim Deegan <tim@xxxxxxx>
>>> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
>>> ---
>>>  xen/arch/x86/hvm/ioreq.c        | 136 ++++++++++++++++++++++++++++++++++++++++
>>>  xen/arch/x86/mm.c               |  27 ++++++++
>>>  xen/include/asm-x86/hvm/ioreq.h |   2 +
>>>  xen/include/public/hvm/dm_op.h  |   4 ++
>>>  xen/include/public/memory.h     |   3 +
>>>  5 files changed, 172 insertions(+)
>>>
>>> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
>>> index 795c198f95..9e6838dab6 100644
>>> --- a/xen/arch/x86/hvm/ioreq.c
>>> +++ b/xen/arch/x86/hvm/ioreq.c
>>> @@ -231,6 +231,15 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
>>>      struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
>>>      int rc;
>>>
>>> +    if ( iorp->page )
>>> +    {
>>> +        /* Make sure the page has not been allocated */
>>> +        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
>>> +            return -EPERM;
>>> +
>>> +        return 0;
>>
>> EEXIST? (See comment below, which I think also applies here).
>>
>>> +    }
>>> +
>>>      if ( d->is_dying )
>>>          return -EINVAL;
>>>
>>> @@ -253,6 +262,60 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
>>>      return rc;
>>>  }
>>>
>>> +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
>>> +{
>>> +    struct domain *currd = current->domain;
>>> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
>>> +
>>> +    if ( iorp->page )
>>> +    {
>>> +        /* Make sure the page has not been mapped */
>>> +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
>>> +            return -EPERM;
>>> +
>>> +        return 0;
>>
>> Shouldn't this return EEXIST?
>> Page has already been allocated by a
>> previous call AFAICT, and it seems like a possible error/misbehavior
>> to try to do it twice.
>>
>
> The checks are there to prevent a caller from trying to mix the legacy and
> new methods of mapping ioreq server pages, so EPERM (i.e. 'operation not
> permitted') seems like the correct error. I agree that it's not obvious,
> at this inner level, but I do think this is right. I'm open to debate
> about this though.

-EBUSY then?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel